Marquee de Sells: Chris's insight outlet

You've reached the internet home of Chris Sells, who has a long history as a contributing member of the Windows developer community. He enjoys long walks on the beach and various computer technologies.




Blog Past and Future

This blog started as a single static page in 1995 as a set of links to provide to my students while I was teaching at DevelopMentor. I would like to show you a screenshot of that initial page, but as it turns out, the site predates the internet archive, so I can only show you what it looked like in 1998:

[image: the site as it looked in 1998]

I guess I was doing some independent contracting at the time, because I was billing myself as a “Windows Object Architect,” whatever that is. BTW, I wouldn’t call that phone number if I were you – I don’t know who it will ring, but it won’t be me. The rest still works, however.

Posts

Over the years, I’ve done more or less blogging based on my current gig:

[image: posts per year]

This post will be my 2,650th; my posting peaked in 2003.

Tweets

Now, I’m far more active on Twitter:

[image: tweets per year]

My first tweet was in October of 2009. I'd had an account for a while before that, but I just didn't get it at first. Now I love it and have produced 3,053 tweets in 7 years. I find that while I like long-form writing a great deal, it's much easier to find the time to turn a single thought into 140 characters than into 1,400 words.

Blot

This is all coming up now because I’m busy moving to Blot, which gives me a chance to take a look back at all of this content I’ve generated. I love Blot because I can dump all of my old content into Dropbox in HTML fragment format (along with some per-file metadata) and Blot will produce a reasonable static site for me. By moving to the file system from a blogging API (AtomPub in my case), I can remove the need to use blogging tools (like Live Writer) and instead switch to any reasonable editor I want.

Further, since Blot supports all kinds of formats, I can move to Markdown for new content but not have to try to translate all of my HTML content, which is a lifesaver.

Unfortunately, the port to Blot is taking longer than I'd like for two reasons. The first is simply that David Merfield just didn't anticipate some old guy dumping 20 years' worth of blog content into his system, so there have been some problems. The good news is that David is extremely responsive. Every system has issues, but the measure of quality is how long it takes to go from issue reported to issue fixed; in the case of Blot, that time is sometimes days but more often hours, and that includes adding features specifically for my use case that he just hasn't needed before. Highly recommended.

The Dead Web

The other reason that this translation is taking some time is that I’ve got a few link formats in my content and relied on IIS URL rewriting to keep them working. As I move to Blot, it’s easier to just fix the URLs as I extract the data from SQL Server (and I still use and love RegexD to figure out how to translate those URLs). As I do that, I’m testing for 404 links on my new site to make sure that I haven’t screwed anything up (I like Xenu's Link Sleuth for that work).

What I'm finding as I fix my own URLs is hundreds of links out into the larger web that are broken. That's just depressing. I work hard to keep my site running for anyone that wants the old data and I'll be working with David on a URL forwarding scheme and 404 logging to keep external links working as I move to Blot. However, that doesn't seem to be an important goal for other folks.

Where Are We?

Still, I get to move to Blot and use whatever editor I want from whatever OS I want, so I’m a happy guy. Hopefully that happiness will translate into more blog posts, but if it doesn’t, I imagine I’ll still be spouting off on Twitter at the very least. Everyone needs a place to spout off sometimes.





Handling Orientation Changes in Xamarin.Forms Apps

By default, Xamarin.Forms handles orientation changes for you automatically, e.g.

[images: Xamarin.Forms handles orientation changes automatically]

In this example, the labels are above the text entries in both the portrait and the landscape orientation, which Xamarin.Forms can do without any help from me. However, what if I want to put the labels to the left of the text entries in landscape mode to take better advantage of the space? Further, in the general case, you may want to have different layouts for each orientation. To be able to do that, you need to be able to detect the device’s current orientation and get a notification when it changes. Unfortunately, Xamarin.Forms provides neither, but luckily it’s not hard for you to do it yourself.

Finding the Current Orientation

To determine whether you’re in portrait or landscape mode is pretty easy:

static bool IsPortrait(Page p) { return p.Width < p.Height; }

This function assumes that portrait mode has a smaller width than height. That won't hold for every imaginable future device, of course, but in the case of a square device, you'll just have to take your chances, I guess.

Orientation Change Notifications

Likewise, Xamarin.Forms doesn't have an OrientationChanged event, but I find that handling SizeChanged does the trick just as well:

SizeChanged += (sender, e) => Content = IsPortrait(this) ? portraitView : landscapeView;

The SizeChanged event seems to get called exactly once as the user goes from portrait to landscape mode (at least in my debugging, that was true). The different layouts can be whatever you want them to be. I was able to use this technique and get myself a little extra vertical space in my landscape layout:

[image: Using a custom layout to put the labels on the left of the text entries instead of on top]

Of course, I could use this technique to do something completely differently in each orientation, but I was hoping that the two layouts made sense to the user and didn’t even register as special, which Xamarin.Forms allowed me to do.
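To make this concrete, here's a minimal sketch of a page that swaps between two layouts. The page and its controls are my own invention for illustration; only the IsPortrait/SizeChanged trick comes from above:

public class ProfilePage : ContentPage {
  static bool IsPortrait(Page p) { return p.Width < p.Height; }

  // each orientation gets its own view tree; a view can only have one
  // parent at a time, so don't share child controls between the two
  static View MakeView(StackOrientation orientation) {
    return new StackLayout {
      Orientation = orientation,
      Children = {
        new Label { Text = "Name:" },
        new Entry { Placeholder = "Name" },
      },
    };
  }

  public ProfilePage() {
    var portraitView = MakeView(StackOrientation.Vertical); // label above the entry
    var landscapeView = MakeView(StackOrientation.Horizontal); // label to the left
    SizeChanged += (sender, e) => Content = IsPortrait(this) ? portraitView : landscapeView;
  }
}

In a real app, you'd bind both view trees to the same model so that anything the user typed survives the swap.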





Launching the Native Map App from Xamarin.Forms

My goal was to take the name and address of a place and show it on the native map app regardless of which mobile platform my app was running on. While Xamarin.Forms provides a cross-platform API to launch the URL that starts the map app, the URL format is different depending on whether you're using the Windows Phone 8 URI scheme for Bing maps, the Android Data URI scheme for the map intent or the Apple URL scheme for maps.

This is what I came up with:

public class Place {
  public string Name { get; set; }
  public string Vicinity { get; set; }
  public Geocode Location { get; set; }
  public Uri Icon { get; set; }
}
public void LaunchMapApp(Place place) {
  // Windows Phone doesn't like ampersands in the names and the normal URI escaping doesn't help
  var name = place.Name.Replace("&", "and"); // var name = Uri.EscapeUriString(place.Name);
  var loc = string.Format("{0},{1}", place.Location.Latitude, place.Location.Longitude);
  var addr = Uri.EscapeUriString(place.Vicinity);

  var request = Device.OnPlatform(
    // iOS doesn't like %s or spaces in their URLs, so manually replace spaces with +s
    string.Format("http://maps.apple.com/maps?q={0}&sll={1}", name.Replace(' ', '+'), loc),
    // pass the address to Android if we have it
    string.Format("geo:0,0?q={0}({1})", string.IsNullOrWhiteSpace(addr) ? loc : addr, name),
    // WinPhone
    string.Format("bingmaps:?cp={0}&q={1}", loc, name)
  );

  Device.OpenUri(new Uri(request));
}

This code was tested on several phone and tablet emulators and on 5 actual devices: an iPad running iOS 8, an iPod Touch running iOS 8, a Nokia Lumia 920 running Windows Phone 8.1, an LG G3 running Android 4.4 and an XO tablet running Android 4.1. As you can tell, each platform has not only its own URI format for launching the map app, but its own quirks as well. However, this code works well across platforms. Enjoy.





App and User Settings in Xamarin.Forms Apps

Settings let you keep the parameters that configure the behavior of your app separate from the code, which allows you to change that behavior without rebuilding the app. This is handled at the app level for things like server addresses and API keys and at the user level for things like restoring the last user input and theme preferences. Xamarin.Forms provides direct support for neither, but that doesn't mean you can't easily add it yourself.

App Settings

Xamarin.Forms doesn’t have any concept of the .NET standard app.config. However, it’s easy enough to add the equivalent using embedded resources and the XML parser. For example, I built a Xamarin.Forms app for finding spots for coffee, food and drinks between where I am and where my friend is (MiddleMeeter, on GitHub). I’m using the Google APIs to do a bunch of geolocation-related stuff, so I need a Google API key, which I don’t want to publish on GitHub. The easy way to make that happen is to drop the API key into a separate file that’s loaded at run-time but to not check that file into GitHub by adding it to .gitignore. To make it easy to read, I added this file as an Embedded Resource in XML format:

[image: Adding an XML file as an embedded resource makes it easy to read at run-time for app settings]

I could’ve gone all the way and re-implemented the entire .NET configuration API, but that seemed like overkill, so I kept the file format simple:

<?xml version="1.0" encoding="utf-8" ?>
<config>
  <google-api-key>YourGoogleApiKeyHere</google-api-key>
</config>

Loading the file at run-time uses the normal .NET resources API:

string GetGoogleApiKey() {
  var type = this.GetType();
  var resource = type.Namespace + "." +
    Device.OnPlatform("iOS", "Droid", "WinPhone") + ".config.xml";

  using (var stream = type.Assembly.GetManifestResourceStream(resource))
  using (var reader = new StreamReader(stream)) {
    var doc = XDocument.Parse(reader.ReadToEnd());
    return doc.Element("config").Element("google-api-key").Value;
  }
}

I used XML as the file format not because I'm in love with XML (although it does the job well enough for things like this), but because LINQ to XML is baked right into Xamarin. I could've used JSON, too, of course, but that requires an extra NuGet package. Also, I could've abstracted things a bit to make an easy API for more than one config entry, but I'll leave that for enterprising readers.
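By way of a head start, here's a minimal sketch of what that might look like, assuming the same per-platform config.xml embedded resource as above (the AppConfig class and its cache-everything policy are mine, not anything Xamarin provides):

// minimal sketch: load the embedded config.xml once and cache every child
// of <config> as a key/value pair; needs System.Collections.Generic,
// System.IO, System.Linq, System.Xml.Linq and Xamarin.Forms
static class AppConfig {
  static Dictionary<string, string> entries;

  public static string Get(object anchor, string key) {
    if (entries == null) {
      var type = anchor.GetType();
      var resource = type.Namespace + "." +
        Device.OnPlatform("iOS", "Droid", "WinPhone") + ".config.xml";

      using (var stream = type.Assembly.GetManifestResourceStream(resource))
      using (var reader = new StreamReader(stream)) {
        entries = XDocument.Parse(reader.ReadToEnd())
          .Element("config")
          .Elements()
          .ToDictionary(e => e.Name.LocalName, e => e.Value);
      }
    }
    return entries[key];
  }
}

// usage, from any instance in the same assembly as the resource:
// var apiKey = AppConfig.Get(this, "google-api-key");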

User Settings

While app settings are read-only, user settings are read-write, and each of the supported Xamarin platforms has its own place to store settings, e.g. .NET developers will likely have heard of Isolated Storage. Unfortunately, Xamarin provides no built-in support for abstracting away the platform specifics of user settings. Luckily, James Montemagno has. In his Settings Plugin NuGet package, he makes it super easy to read and write user settings. For example, in my app, I pull in the previously stored user settings when I'm creating the data model for the view on my app's first page:

class SearchModel : INotifyPropertyChanged {
  string yourLocation;
  // reading values saved during the last session (or setting defaults)
  string theirLocation = CrossSettings.Current.GetValueOrDefault("theirLocation", "");
  SearchMode mode = CrossSettings.Current.GetValueOrDefault("mode", SearchMode.food);
  ...
}

The beauty of James's API is that it's concise (only one function to call to get a value or set a default if the value is missing) and type-safe, e.g. notice the use of a string and an enum here. He handles the specifics of reading from the correct underlying storage mechanism for each platform and translating values into my native type system; I just get to write my code w/o worrying about it. Writing is just as easy:

async void button1_Clicked(object sender, EventArgs e) {
  ...

  // writing settings values at an appropriate time
  CrossSettings.Current.AddOrUpdateValue("theirLocation", model.TheirLocation);
  CrossSettings.Current.AddOrUpdateValue("mode", model.Mode);

  ...
}

My one quibble is that I wish the functions were called Read/Write or Get/Set instead of GetValueOrDefault/AddOrUpdateValue, but James’s function names make it very clear what’s actually happening under the covers. Certainly the functionality makes it more than worth the extra characters.
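If the names bother you too, a thin wrapper of your own is only a few lines (this sugar is mine, not part of James's package, and it assumes the generic GetValueOrDefault/AddOrUpdateValue overloads shown above):

// my sugar over the Settings Plugin; not part of the package itself
static class UserSettings {
  public static T Read<T>(string key, T defaultValue = default(T)) {
    return CrossSettings.Current.GetValueOrDefault(key, defaultValue);
  }

  public static void Write<T>(string key, T value) {
    CrossSettings.Current.AddOrUpdateValue(key, value);
  }
}

// e.g. var mode = UserSettings.Read("mode", SearchMode.food);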

User Settings UI

Of course, when it comes to building a UI for editing user settings at run-time, Xamarin.Forms has all kinds of wonderful facilities, including a TableView intent specifically for settings (TableIntent.Settings). However, when it comes to extending the platform-specific Settings app, you’re on your own. That’s not such a big deal, however, since only iOS actually supports extending the Settings app (using iOS Settings Bundles). Android doesn’t support it at all (they only let the user configure things like whether an app has permission to send notifications) and while Windows Phone 8 has an extensible Settings Hub for their apps, it’s a hack if you do it with your own apps (and unlikely to make it past the Windows Store police).
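As a sketch of what that in-app route looks like, a settings page is just a TableView with the right intent; the section and cell contents here are made up for illustration:

// a minimal settings page; TableIntent.Settings hints to the platform
// renderer that this table should look like a settings UI
var settingsPage = new ContentPage {
  Title = "Settings",
  Content = new TableView {
    Intent = TableIntent.Settings,
    Root = new TableRoot("Settings") {
      new TableSection("Search") {
        new EntryCell { Label = "Their location" },
        new SwitchCell { Text = "Remember my last search" },
      },
    },
  },
};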

Where Are We?

So, while Xamarin.Forms doesn’t provide any built in support for app or user settings, the underlying platform provides enough to make implementing the former trivial and the Xamarin ecosystem provides nicely for the latter (thanks, James!).

Even more interesting is what Xamarin has enabled with this ecosystem. They've mixed their very impressive core .NET and C# compiler implementation (Mono) with a set of mobile libraries providing direct access to the native platforms (MonoTouch and MonoDroid), added a core implementation of UI abstraction (Xamarin.Forms) and integration into the .NET developer's IDE of choice (Visual Studio), together with an extensible, discoverable set of libraries (NuGet) that makes it easy for 3rd party developers to contribute. That's a platform, my friends, and it's separate from the one that Microsoft is building. What makes it impressive is that it takes the army of .NET developers and points them at the current hotness, i.e. building iOS and Android apps, in a way that Microsoft never could. Moreover, because they've also managed to support Windows Phone pretty seamlessly, they've gotten Microsoft to back them.

We’ll see how successful Xamarin is over time, but they certainly have a very good story to tell .NET developers.





Microsoft Fan Boy Goes To Google

In 1992, I was a Unix programmer in Minneapolis. I’d graduated with a BS in Computer Science from the University of MN a year earlier and had written my programming assignments in C and C++ via first a VT100 terminal and then a VT100 terminal emulator on my Mac (running System 7, if you’re curious). My day job was at an AT&T VAR building multi-user voice response systems on Unix System V. My favorite editor was vi (not vim) and, like all good vi programmers, I hated emacs with a white hot passion.

Being bored with my current job, I posted my resume on the internet, which meant uploading it in ASCII text to an FTP site where tech companies knew to look for it. The tech company that found it was Intel. To prepare for my interview in Portland, OR, I went to play with a Windows 3.1 machine that someone had set up in the office, but nobody used. I had a Mac at home and Unix at work and for the 10 minutes that I could stand to use it, Windows 3.1 seemed like the worst of both. In spite of my distaste, Intel made an offer I couldn’t refuse and I found myself moving with my new wife to a new city for a new job and a new technology stack.

The move to Intel started my love affair with Windows (starting with Windows 95, of course, let's be reasonable). Over the years, I grew to love Word, Excel, Visio, PowerPoint, Outlook, Live Writer, Skype, Windows XP, Windows 7, COM, ATL, .NET, C# and of course the Big Daddy for Windows developers: Visual Studio. Not only did I become a Windows fan boy (I can't tell you how lonely it is to own a Windows Phone after the iPhone was released), but I became a contributing member of the Windows community, accounting for nearly 100% of the content on this web site, first published in 1995 solely to provide links to my DevelopMentor students but growing steadily since (over 2,600 posts in 20 years). Add to that more than a dozen books and countless public speaking engagements, magazine articles and internet appearances and you've got a large investment in the Windows technology stack.

Of course, as I take on roles beyond consultant, speaker, author and community PM, I contribute less and less (although I do love spouting off into my twitter feed). Even so, I've been a regular attendee at Windows-related events and 90% of my friends are also Windows developers, so the idea of leaving not just a technology ecosystem but an entire community behind is a pretty daunting one.

And then, about 45 days ago, Google came knocking with an offer I couldn’t refuse. A few days after that, before I’ve even officially accepted the offer, I find myself in a bidding war for a house in Kirkland, WA that the wife and I both love (which almost never happens). So, for the first time since 1992, with my three boys graduated from high school, I find myself moving with my new wife to a new city for a new job and a new technology stack. As I write this, it’s the Friday before my Noogler orientation week (New Googler – get it?). I’ll be working on tools for Google cloud developers, which matches my Windows experience helping developers build distributed systems, although there’s going to be a huge learning curve swapping in the details.

After 20 years with Visual Studio, I don’t know if my fingers still know vi, but I can’t wait to find out. If I get a beer or two in me, I might even give emacs another try…





Future Proof Your Technical Interviewing Process: Hiring or Not

This is the last in a 4-part series on how to interview well. Parts 1-3 covered the phone screen, the technical interview and the fit interviews. In this part, we'll wrap up by talking about how to make the hiring decision.

Make Time For Questions

As important as the questions you ask the candidate is leaving time for them to ask their own. Remember that they're interviewing you, too. Be open and honest about the answers; technical people have a sensitive bullshit detector, so don't try to pretend that everything is perfect; they'll know if you're not being sincere. However, it's a fine line. If you find yourself dwelling on the negative, you have to wonder if you've found a good fit for yourself.

Also, don't forget to factor their questions into your own thinking about the candidate. The questions they ask about a job and a team they're going to be spending 40+ hours/week with are as good an indicator of how they think as anything else.

Making the Call

As you pass the interview candidate from person to person, make sure that you spend a few minutes in private with the next interviewer talking about what you heard that you liked as well as things you'd like them to circle back on. You want to give them an opportunity to try again, either to convince you it's not an issue or to confirm that it is.

Every interviewer should share their thoughts about the candidate soon while they're fresh. You can send an email around to the team as you finish or get together in the same room after the candidate has headed home, but it should be the same day; those first impressions matter.

Ultimately each interviewer will provide three pieces of information: a thumbs up/down (whether you use actual thumbs for this process is up to you : ), a confidence level (do you really love this person? are you on the fence?) and an explanation (“I loved how they think about the customer!” or “They never figured out how to efficiently search an infinite space of possible solutions.”)

The set of interview results will come out in three ways:

  1. Everyone loved that candidate. Hire them.
  2. Everyone hated the candidate. Don't hire them. Be polite!
  3. There's a mix. Discuss. Potentially get more info.

Of course, options #1 and #2 are easy to deal with. Unfortunately, option #3 is where most candidates fall. The question is, what do you do with a candidate with mixed results? If you're following the principle that it's better to send a good candidate away than to hire a bad one, then you'll pass on them. However, you'll want to spend some extra time on candidates like these. Discuss it amongst the team. See how adamant the thumbs up voters are and why. See how adamant the thumbs down voters are and why. If the candidate is on the fence but leaning towards "hire," pick someone else to talk to them and/or get them into a different environment, e.g. the bar down the street or the bowling alley at the company Xmas party, and see how they do.

Ultimately, it boils down to one thing: does the team as a whole want to bring the candidate into the team? If so, great. If not, let them go. Certainly a senior member of the company or department can override the team and hire a candidate over their objections, but I wouldn't recommend it. You're much more likely to hurt a good team in those situations than to help it.

Where Are We?

Whether you agree with the specifics of this process or not, I encourage you to spend the time to really examine your process. You want the team you build to be more than the sum of the parts, but that kind of magic requires first that you have great parts.





Future Proof Your Technical Interviewing Process: The Fit Interviews

If you just found yourself here, you’ve stumbled onto a multi-part series on the technical interviewing process. Part 1 covered the phone screen and part 2 covered the technical interview. Today we’re going to discuss the “fit” interviews, that is, team and cultural fit.

The Team Fit Interview

Modern software development is done in teams. You want to be able to judge any candidate as a productive, positive member of your team. They don't necessarily have to have experience doing things the way you do them, but they should show the ability to adapt when issues arise. Your job in the team fit interview is to break the important things that happen in your team into situations that you can ask your candidate about. The following are pretty standard examples:

However, you have to be careful here. Pretty much anyone can give you the "right" answers to these questions, but you don't want the "right" answers – you want the real answers. How does a candidate actually behave in the face of these situations?

The best way I know of to get the real answers out of someone is something called Behavioral Interviewing. The idea is simple: instead of asking someone how they would act if faced with a certain situation, ask them to describe an example in their past when they've had to deal with that situation. Discuss it with them. How did their strategy work for them? What did they learn? What would they do differently?

Just this one shift from “how would you deal with this situation” to “how did you deal with this situation” will get you a much deeper look into how a candidate actually behaves, which allows you to decide if they're a good fit for your team.

The Cultural Fit Interview

The goal of the cultural fit interview is to figure out whether the candidate will like their new working environment and whether the team will be glad to have them. It's enormously important and very difficult to assess. One typical way to approach this type of interview is to ask the following kinds of questions, also in a Behavioral Interviewing style:

These questions are much more vague and really meant to start a conversation, but they're also very hit-and-miss. If you happen to hit the right path, you can really crack a candidate open like a ripe nut.

Also, you want to be careful how you interpret the answers. If you don't filter out people that aren't a good fit for the culture of the company, they'll be unhappy and you'll be unhappy. On the other hand, if you filter too much, you'll lose out on the benefits of diversity. It's a hard line to walk.

Another way to approach a culture fit interview is to get creative. Maybe invite the person to a company event, perhaps a semi-public mixer or a Friday afternoon beer bash. Maybe sit down with the team over lunch and play a game together. Maybe sit in the café and grab lunch in a small group and see how the conversation goes.

I think the key to finding a good fit culturally is to spend time with the candidate that doesn’t center around the technology you’re using to build your products. For example, involving a candidate in something that the team does for fun can go a long way towards finding a great new member for your team.

Next Time

Tune in next time for when we wrap this series up and talk about how to make the hiring decision.





Future Proof Your Technical Interviewing Process: The Technical Interview

It’s incredibly important to interview well as you’re building your technical team. Further, interviewing well is hard to do and, like anything, you only get out of it what you put into it. In part 1 of this series, we discussed the phone screen. In this part, we’ll discuss the technical interview.

The Technical Interview

The only way to really know if someone can deliver technically is to give them a problem to solve and watch them solve it. You can do this with simple data structure problems on the whiteboard, test questions on paper, algorithm problems in notepad, real-world problems with some pair programming or puzzle problems with them waving their hands wildly in the air. In a technical interview, you should encourage the candidate to think out loud, because you care more about how they go about solving the problem than about actually getting to an answer. You will look for the following things:

This last one is the one I tend to focus on the most. Even more important than a candidate having knowledge of the technologies you're going to ask them to use is their ability to understand new technologies over time.

My father always says that while teenage drivers hopped up on testosterone may get into the most accidents, they're the ones that push the cars to see what they will do. You want to hire engineers that have pushed technologies past their limits for the pure joy of it. Those are going to be the ones that build the deep knowledge and can adapt in the future to whatever comes their way.

I filter for deep understanding by digging into not only the "how" of whatever they claim to know best, but also the "why." They may know how to build a factory in Angular, but do they understand what a factory is and why Angular does it that way? They may know how to manage their resources in the face of the JVM's garbage collector, but do they know why we use garbage collection and what the downsides are? Do they understand what canvas is good for, what SVG is good for and when to choose which?

The key here is that past behavior indicates future behavior – if they've developed deep understanding of the technologies they've learned before, chances are pretty good that they're going to be able to do that for the new technologies your team adopts in the future. There is no better way to understand how well they're going to do on future technical challenges than hearing how they've handled such challenges in the past and seeing how they do it right in front of you.

What’s Next in This Series

However, the technical fit is not the only thing you need to look for – you also want to make sure that they will fit in well on your team and the company culture overall. We’ll talk about these in the next piece in this series.





Future Proof Your Technical Interviewing Process: The Phone Screen

In 30 years, I've done a lot of interviewing from both sides of the table. Because of my chosen profession, my interviewing has been for technical positions, e.g. designers, QA, support, docs, etc., but mostly for developers and program managers, both of which need to understand a system at the code level (actually, I think VPs and CTOs need to understand a system at the code level, too, but the interview process for those kinds of people is a superset of what I'll be discussing in this series).

In this discussion, I'm going to assume you've got a team doing the interview, not just a person. Technical people need to work well in teams and you should have 3-4 people in the interview cycle when you're picking someone to join the team.

The Most Important Thing!

Let me state another assumption: you care about building your team as much as you care about building your products. Apps come and go, but a functional team is something you want to cherish forever (if you can). If you just want to hire someone to fill a chair, then what I'm about to describe is not for you.

The principle I pull from this assumption is this: it's better to let a good candidate go than to hire a bad one.

A bad hire can do more harm than a good hire can repair. Turning down a "pretty good" candidate is the hardest part of any good interview process, but this one principle is going to save you more heartache than any other.

The Phone Screen

So, with these assumptions in mind, the first thing you always want to do when you've got a candidate is to have someone you trust do a quick phone screen, e.g. 30 minutes. This can be an HR person or someone that knows the culture of the company and the kind of people you're looking for. A phone screen has only one goal: to avoid wasting the team's time. If there's anything that's an obvious mismatch, e.g. you require real web development experience, but the phone screen reveals that the candidate really doesn’t, then you say "thanks very much" and move on to the next person.

If it's hard to get a person to come into your office -- maybe they're in a different city -- you'll also want to add another 30 minutes for a technical phone screen, e.g.

Whatever it is, you want to make reasonably sure that they're going to be able to keep up with their duties technically before you bring them on site, or you’re just wasting the team’s time.

At this point, if you're hiring a contractor, you may be done. Contractors are generally easy to fire, so you can bring them on and let them go easily. Some companies start all of their technical hires as contractors first for a period of 30-90 days and only hire them if that works out.

If you’re interviewing for an FTE position, once they’ve passed the phone screen, you're going to bring them into the office.

You should take a candidate visit seriously; you're looking for a new family member. Even before they show up, make sure you have a representative sample of the team in the candidate's interview schedule. At the very least, you need someone to drill into their technical abilities, someone to dig into their ability to deliver as part of a team and someone to make sure that they're going to be a cultural fit with the company as a whole. Each of these interview types is different and deserves its own description.

Future Posts in This Series

Tune in to future posts in this series where we’ll be discussing:





Head of Google interviewing says “results matter, riddles don’t”

Google, like Microsoft, is famous for asking brain-teaser style questions during their interviews. However, in a June 2013 interview with the New York Times, Laszlo Bock, the Sr. VP of HR for Google, said that

“[B]rainteasers are a complete waste of time. How many golf balls can you fit into an airplane? How many gas stations in Manhattan? A complete waste of time. They don’t predict anything. They serve primarily to make the interviewer feel smart.”

In another interview, Bock said that when putting together a resume, focus on what you did in relation to the expectations:

“The key is to frame your strengths as: ‘I accomplished X, relative to Y, by doing Z.’ Most people would write a résumé like this: ‘Wrote editorials for The New York Times.’ Better would be to say: ‘Had 50 op-eds published compared to average of 6 by most op-ed [writers] as a result of providing deep insight into the following area for three years.’ Most people don’t put the right content on their résumés.”

Amen!





Moving My ASP.NET Web Site to Disqus

I'm surprised how well my commentRss proposal has been accepted in the world. As often as not, if I'm digging through an RSS feed for a site that supports comments, that site also provides a commentRss element for each item. When I proposed this element, my thinking was that I could make a comment on an item of interest, then check a box and I'd see async replies in my RSS client, thereby fostering discussion. Unfortunately, RSS clients never took the step of allowing me to subscribe to comments for a particular item, and a standard protocol for adding a comment never emerged, which made it even less likely for RSS clients to add that check box. All in all, commentRss is a failed experiment.

Fostering Discussion in Blog Post Comments

However, the idea of posting comments to a blog post and subscribing to replies took off in another way. For example, Facebook does a very good job in fostering discussion on content posted to their site:

[image: Facebook supports comments and discussions nicely]

Not only does Facebook provide a nice UI for comments, but as I reply to comments that others have made, they’re notified. In fact, as I was taking the screenshot above, I replied to Craig’s comment and within a minute he’d pressed the Like button, all because of the support Facebook has for reply notification.

However, Facebook commenting only works for Facebook content. I want the same kind of experience with my own site's content. For a long time, I had my own custom commenting system, but the bulk of the functionality was around keeping spam down, which was a huge problem. I recently dumped my comments to an XML format and of the 60MB of output, less than 8MB were actual comments – more than 80% was comment spam. I tried adding reCAPTCHA and eventually email approval of all comments, but none of that fostered back-and-forth discussions over time because I didn't have notifications. Of course, to implement notifications, you need user accounts with email verification, which was a whole other set of features that I just never got around to implementing. And even if I had, it would have taken me a lot more effort to get to the level of quality that Disqus provides.

Integrating Disqus Into Your Web Site

Disqus provides a service that lets me import, export and manage comments for my site’s blog posts, the UI for my web site to collect and display comments and the notification system that fosters discussions. And they watch for spam, too. Here’s what it looks like on a recent post on my site:

[image: The Disqus service provides a discussion UI for your web site]

Not only does Disqus provide the UI for comments, but it also provides the account management so that commenters can have icons and get notifications. With the settings to allow for guest posting, the barrier to entry for a reader who wants to leave a comment is zero. The effort to add the code that enables it on your site isn't zero, but it's pretty close. Once you've established a free account on disqus.com, you can simply create a forum for your site and drop in some boilerplate code. Here's what I added to my MVC view for a post's detail page to get the discussion section above:

<%-- Details.aspx --%>
...
<asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
  <!-- post -->
  ...
  <h1><%= Model.Post.Title %></h1>
  <div><%= Model.Post.Content %></div>
  <!-- comments -->
  <div id="disqus_thread"></div>
  <script type="text/javascript">
    var disqus_shortname = "YOUR-DISQUS-SITE-SHORTNAME-HERE";
    var disqus_identifier = <%= Model.Post.Id %>;
    var disqus_title = "<%= Model.Post.Title %>";

    /* * * DON'T EDIT BELOW THIS LINE * * */
    (function () {
      var dsq = document.createElement('script'); dsq.type = 'text/javascript'; dsq.async = true;
      dsq.src = '//' + disqus_shortname + '.disqus.com/embed.js';
      (document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(dsq);
    })();
  </script>
</asp:Content>

The discussion section for any post is just a div with the id set to “disqus_thread”. The code is from the useful-but-difficult-to-find Disqus universal embed code docs. The JavaScript added to the end of the page creates a Disqus discussion control in the div you provide using the JS variables defined at the top of the code. The only JS variable that’s required is the disqus_shortname, which defines the Disqus data source for your comments. The disqus_identifier is a unique ID associated with the post. If this isn’t provided, the URL for the page the browser is currently showing will be used, but that doesn’t work for development mode from localhost or if the comments are hosted on multiple sites, e.g. a staging server and a production server, so I recommend setting disqus_identifier explicitly. The disqus_title will likewise be taken from the current page’s title, but it’s better to set it yourself to make sure it’s what you want.

And that's it. Instead of tuning the UI in the JS code, you do so in the settings on disqus.com, which include things like the default order of comments, the color scheme, how much moderation you want, etc.

There's one more page on your site where you'll want to integrate Disqus: the page that provides the list of posts along with the comment link and comment count:

[image: Disqus will add comment counts to your comment lists, too]

Adding support for the comment count is similar to adding support for the discussion itself:

<%-- Index.aspx --%>
...
<asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
  ...
  <!-- post -->
  <h2><%= Html.ActionLink(post.Title, "Details", "Posts", new { id = post.Id }, null) %></h2>
  <p><%= post.Content %></p>
  <!-- comment link --> 
  <p><%= Html.ActionLink("0 comments", "Details", "Posts", null, null, "disqus_thread",
    new RouteValueDictionary(new { id = post.Id }),
    new Dictionary<string, object>() { { "data-disqus-identifier", post.Id } }) %></p>
  ...
  <script type="text/javascript">
    // from: https://help.disqus.com/customer/portal/articles/565624
    var disqus_shortname = "sellsbrothers";

    /* * * DON'T EDIT BELOW THIS LINE * * */
    (function () {
      var s = document.createElement('script'); s.async = true;
      s.type = 'text/javascript';
      s.src = 'http://' + disqus_shortname + '.disqus.com/count.js';
      (document.getElementsByTagName('HEAD')[0] || document.getElementsByTagName('BODY')[0]).appendChild(s);
    }());
  </script>
</asp:Content>

Again, this code is largely boilerplate and comes from the Disqus comment count docs. The call to Html.ActionLink is just a fancy way to get an A tag with an href of the following format:

<a href="/Posts/Details/<<POST-ID>>#disqus_thread" data-disqus-identifier="<<POST-ID>>">0 comments</a>

The "disqus_thread" fragment at the end of the href does two things. The first is that it provides a link to the discussion portion of the details page so that the reader can scroll directly to the comments after reading the post. The second is that it provides a tag for the Disqus JavaScript code to change the text content of the A tag to show the number of comments.

The “data-disqus-identifier” attribute sets the unique identifier for the post itself, just like the disqus_identifier JS variable we saw earlier.

The A tag text content that you provide will only be shown if Disqus does not yet know about that particular post, i.e. if there are no comments yet, then it will leave it alone. However, if Disqus does know about that post, it will replace the text content of the A tag as per your settings, which allow you to be specific about how you want 0, 1 and n comments to show up on your site; “0 comments”, “1 comment” and “{num} comments” are the defaults.

Importing Existing Comments into Disqus

At this point, your site is fully enabled for Disqus discussions and you can deploy. In the meantime, if you’ve got existing comments like I did, you can import them using Disqus’s implementation of the WordPress WXR format, which is essentially RSS 2.0 with embedded comments. The Disqus XML import docs describe the format and some important reminders. The two reminders they list are important enough to list again here:

The XML import docs do a good job showing what the XML format is as an example, but they only list one of the data size requirements. In my work, I found several undocumented limits as well:

Something else to keep in mind is that, as part of the comment import process, Disqus translates the XML data into JSON data, which makes sense. However, they report their errors in terms of the undocumented JSON data structure, which can be confusing as hell. For example, I kept getting a "missing or invalid message" error along with the JSON version of what I thought was the message they were referring to. The problem was that by "message", Disqus didn't mean "the JSON data packet for a particular comment," they meant "the field called 'message' in our undocumented JSON format which is mapped from the comment_content element of the XML." I went round and round with support on this until I figured that out. Hopefully I've saved future generations that trouble.

If you’re a fan of LINQPad or C#, you can see the script I used to pull the posts and comments out of my site’s SQL Server database (this assumes an Entity Framework mapping in a separate DLL, but you get the gist). The restrictions I mention above are encapsulated in this script.
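If you're rolling your own exporter instead, the heart of mine boiled down to building the WXR elements with LINQ to XML. Here's a rough sketch; the element names are from the Disqus import docs as I remember them and the post/comment objects stand in for my Entity Framework entities, so double-check everything against the current docs:

// rough sketch: one RSS item per post, with the Disqus/WordPress
// namespaced extras that the importer looks for
XNamespace content = "http://purl.org/rss/1.0/modules/content/";
XNamespace dsq = "http://www.disqus.com/";
XNamespace wp = "http://wordpress.org/export/1.0/";

var items =
  from post in posts // stand-ins for my EF entities
  select new XElement("item",
    new XElement("title", post.Title),
    new XElement("link", "http://sellsbrothers.com/Posts/Details/" + post.Id),
    new XElement(content + "encoded", new XCData(post.Content)),
    new XElement(dsq + "thread_identifier", post.Id),
    new XElement(wp + "post_date_gmt", post.Date.ToString("yyyy-MM-dd HH:mm:ss")),
    new XElement(wp + "comment_status", "open"),
    from comment in post.Comments
    select new XElement(wp + "comment",
      new XElement(wp + "comment_id", comment.Id),
      new XElement(wp + "comment_author", comment.Author),
      new XElement(wp + "comment_author_email", comment.Email),
      new XElement(wp + "comment_date_gmt", comment.Date.ToString("yyyy-MM-dd HH:mm:ss")),
      new XElement(wp + "comment_content", new XCData(comment.Content)),
      new XElement(wp + "comment_approved", 1)));

var wxr = new XDocument(
  new XElement("rss", new XAttribute("version", "2.0"),
    new XElement("channel", items)));
wxr.Save("comments.wxr");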

Where Are We?

Even though my commentRss extension to the RSS protocol was a failed experiment, the web has figured out how to foster spam-free, interactive discussions with email notifications across web sites. The free Disqus service provides an implementation of this idea and it does so beautifully. I wish importing comments was as easy as integrating the code, but since I only had to do it once, the juice was more than worth the squeeze, as a dear Australian friend of mine likes to say. Enjoy!





Moving My Site to Azure: DNS & SSL

This is part 3 of a multi-part series on taking a real-world web site (mine) written to be hosted on an ISP (securewebs.com) and moving it to the cloud (Azure). The first two parts talked about moving my SQL Server instance to SQL Azure and getting my legacy ASP.NET MVC 2 code running inside of Visual Studio 2013 and published to Azure. In this installment, we'll discuss how I configured DNS and SSL to work with my shiny new Azure web site.

Configuring DNS

Now that I have my site hosted on http://sellsbrothers.azurewebsites.net, I’d like to change my DNS entries for sellsbrothers.com and www.sellsbrothers.com to point to it. For some reason I don’t remember, I have my domain’s name servers pointed at microsoftonline.com and I used Office365 to manage them (it has something to do with my Office365 email account, but I’m not sure why that matters…). Anyway, in the Manage DNS section of the Office365 admin pages, there’s a place to enter various DNS record types. To start, I needed to add two CNAME records:

[image: The CNAME records needed before Azure will award an IP address]

A CNAME record is an alias to some other name. In this case, we're aliasing the awverify.sellsbrothers.com FQDN (the Host name field is really just the part to the left of the domain name to which you're adding records, sellsbrothers.com in this case). The awverify string is just a string that Azure needs to see before it will tell you the IP address it has assigned to you, as a way to guarantee that you do, in fact, own the domain. The www host name maps www.sellsbrothers.com to the Azure web site name, i.e. sellsbrothers.azurewebsites.net. The other DNS record I need is an A record, which maps the main domain, i.e. sellsbrothers.com, to the Azure IP address, which I'll have to add later once Azure tells me what it is.

After adding the awverify and www host names and waiting for the DNS changes to propagate (an hour or less in most cases), I fired up the configuration screen for my web site and chose the Manage Custom Domains dialog:

[image: Finding the IP address to use in configuring your DNS name server from Azure]

Azure provided the IP address after entering the www.sellsbrothers.com domain name. With this in hand, I needed to add the A record:

[image: Adding the Azure IP address to my DNS name servers]

An A record is the way to map a host name to an IP address. The use of the @ means the undecorated domain, so I’m mapping sellsbrothers.com to the IP address for sellsbrothers.azurewebsites.net.

Now, this works, but it's not quite what I wanted. What I really want to do, and what the Azure docs hint at, is to simply have a set of CNAME records, including one that maps the base domain name, i.e. sellsbrothers.com, to sellsbrothers.azurewebsites.net directly, and let DNS figure out what the IP address is. This would allow me to tear down my web server and set it up again, letting Azure assign whatever IP address it wanted, without me being required to update my DNS A record if I ever need to do that. However, while I should be able to enter a CNAME record with a @ host name, mapping it to the Azure web site domain name, the Office365 DNS management UI won't let me do it and Office365 support wasn't able to help.

However, even if my DNS records weren’t future-proofed the way I’d like them to be, they certainly worked and now both sellsbrothers.com and www.sellsbrothers.com mapped to my new Azure web site, which is where those names are pointing as I write this.

However, there was one more feature I needed before I was done porting my site to Azure: secure posting to my blog, which requires an SSL certificate.

Configuring Azure with SSL

Once I had my domain name flipped over, I had one more feature I needed for my Azure-hosted web site to be complete – I needed to be able to make posts to my blog. I implemented the AtomPub publishing protocol for my web site years ago, mostly because it was a protocol with which I was very familiar and because it was one that Windows Live Writer supports. To make sure that only I could post to my web site, I needed to make sure that my user name and password didn’t transmit in the clear. The easiest way to make that happen was to enable HTTPS on my site using an SSL certificate. Of course, Azure supports HTTPS and SSL and the interface to make this happen is simple:

[image: Azure's certificate update dialog for adding an SSL cert to your web site]

Azure requires a file in the PKCS #12 format (generally using the .pfx file extension), which can be a container of several security-related objects, including a certificate. All of this is fine and dandy except that when you go to purchase your SSL cert, you’re not likely to get the file in pfx format, but in X.509 format (.cer or .crt file format). To translate the .crt file into a .pfx file, you need to generate a Certificate Signing Request (.csr) file with the right options so that you keep the private key (.key) file around for the conversion. For a good overview of the various SSL-related file types, check out Kaushal Panday’s excellent blog post.

Now, to actually dig into the nitty gritty, first you're going to have to choose an SSL provider. Personally, I'm a cheapskate and don't do any ecommerce on my site, so my needs were modest. I got myself a RapidSSL cert from namecheap.com that only did domain validation for $11/year. After making my choice, the process went smoothly. To get started, you pay your money and upload a Certificate Signing Request (.csr file). I tried a couple of different ways to get a .csr file, but the one that worked best was the openssl command line tool for Windows. With that tool installed and a command console (running in admin mode) at the ready, you can follow along with the Get a certificate using OpenSSL section of the Azure documentation on SSL certs and be in good shape.
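For reference, the two openssl invocations at the heart of that process look something like this (the file names are mine; adjust to taste):

rem generate a new 2048-bit private key plus the CSR you upload to the SSL provider
openssl req -new -newkey rsa:2048 -nodes -keyout sellsbrothers.key -out sellsbrothers.csr

rem when the signed .crt comes back, bundle it with the private key into the .pfx Azure wants
openssl pkcs12 -export -out sellsbrothers.pfx -inkey sellsbrothers.key -in sellsbrothers.crt

The pkcs12 command also takes a -certfile option for bundling intermediate certificates, but see the warning below before you use it.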

Just one word of warning if you want to follow along with these instructions yourself: There’s a blurb in there about including intermediate certificates along with the cert for your site. For example, when I get my RapidSSL certificate, it came with a GeoTrust intermediate certificate. Due to a known issue, when I tried to include the GeoTrust cert in my chain of certificates, Azure would reject it. Just dropping that intermediate cert on the floor worked for me, but your mileage may vary.

Configuring for Windows Live Writer

Once I had my SSL cert uploaded to Azure, I could configure WLW securely for my new Azure-hosted blog:

[image: Adding a secure login for my Azure-hosted blog]

You’ll notice that I use HTTPS as the protocol to let WLW know I’d like it to use encrypted traffic when it’s transmitting my user name and password. The important part of the rest of the configuration is just about what kind of protocol you’d like to use, which is AtomPub in my case:

[image: Configuring WLW for the AtomPub Publishing protocol]

If you’re interested in a WLW-compatible implementation of AtomPub written for ASP.NET, you can download the source to my site from github.

Where are we?

Getting your site moved to Azure from an ISP involves more than just making sure you can deploy your code – it also includes making sure your database will work in SQL Azure and configuring your DNS and SSL settings as appropriate for your site’s new home.

At this point, I’ve gotten a web site that’s running well in the cloud, but in the spirit of the cloud, I’ve also got an aging comment system that I replaced with Disqus, a cloud-hosted commenting system, which is the subject of my next post. Don’t miss it!





Moving My Site to Azure: ASP.NET MVC 2

In our last episode, I talked about the joy and wonder that is moving my site's ISP-hosted SQL Server instance to SQL Azure. Once I had the data moved over and the site flipped to using the new database, I needed to move the site itself over, which brought joy and wonder all its own.

Moving to Visual Studio 2013

I hadn't had to do any major updates to my site since I rebuilt it in 2010 using Visual Studio 2010. At that time, the state of the art was ASP.NET MVC 2 and Entity Framework 4, which is what I used. And the combination was a pleasant experience, letting me rebuild my site from scratch quickly and producing a site that ran like the wind. In fact, it still runs like the wind. Unfortunately, Visual Studio 2012 stopped supporting MVC 2 (and no surprise, Visual Studio 2013 didn't add MVC 2 support back). When I tried to load my web site project into Visual Studio 2013, it complained:

[image: This version of Visual Studio is unable to open the following projects]

This error message lets me know that there’s a problem and the migration report provides a handy link to upgrade from MVC 2 to MVC 3. The steps aren’t too bad and there’s even a tool to help, but had I followed them, loading the new MVC 3 version of my project into Visual Studio 2013 would’ve given me another error with another migration report and a link to another web page, this time helping me move from MVC 3 to MVC 4 because VS2013 doesn’t support MVC 3, either. And so now I’m thinking, halfway up to my elbows in the move to MVC 3 that Visual Studio 2013 doesn’t like, that maybe there’s another way.

It's not that there aren't benefits to moving to MVC 4, but that's not even the latest version. In fact, Microsoft is currently working on two versions of ASP.NET, ASP.NET MVC 5 and ASP.NET v.Next. Even if I do move my site forward two versions of MVC, I'll still be two versions behind. Of course, the new versions have new tools and new features and can walk my dog for me, but by dropping old versions on the floor, I'm left with the choices of running old versions of Visual Studio side-by-side with new ones, upgrading to new versions of MVC just to run the latest version of VS (even if I don't need any of the new MVC features) or saying "screw it" and just rewriting my web site from scratch. This last option might seem like what Microsoft wants me to do so that they can stop supporting the old versions of MVC, but what's to stop me from moving to AWS, Linux and Node instead of to ASP.NET v.Next? That's the real danger of dropping the old versions on the floor: not that I'll move over to another platform, because I'm a Microsoft fanboy and my MSDN Subscription gives me the OS and the tools for free, but that large paying customers say "screw it" and move their web sites to something that their tools are going to support for more than a few years.

Luckily for me, there is another way: I can cheat. It turns out that if I want to load my MVC 2 project inside of Visual Studio 2013, all I have to do is remove a GUID from the ProjectTypeGuids element of the csproj file. The GUID in question is listed in step 9 of Microsoft's guide for upgrading from MVC 2 to MVC 3:

[image: Removing {F85E285D-A4E0-4152-9332-AB1D724D3325} from your MVC 2 project so it will load in Visual Studio 2013]

By removing this GUID, I give up some of the productivity tools inside Visual Studio, like easily adding a new controller. However, I’m familiar enough with MVC 2 that I no longer need those tools and being able to actually load my project into the latest version of Visual Studio is more than worth it. Andrew Steele provides more details about this hack in his most excellent StackOverflow post.
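In other words, the fix is a one-line edit to the ProjectTypeGuids element of the csproj. The other two GUIDs shown here are the standard web application and C# project type GUIDs from my own project; yours may differ, but only the MVC 2 GUID needs to go:

<!-- before -->
<ProjectTypeGuids>{F85E285D-A4E0-4152-9332-AB1D724D3325};{349c5851-65df-11da-9384-00065b846f21};{fae04ec0-301f-11d3-bf4b-00c04f79efbc}</ProjectTypeGuids>

<!-- after -->
<ProjectTypeGuids>{349c5851-65df-11da-9384-00065b846f21};{fae04ec0-301f-11d3-bf4b-00c04f79efbc}</ProjectTypeGuids>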

Now, to get my MVC 2 project to actually build and run, I needed a copy of the MVC 2 assemblies, which I got from NuGet:

[image: Adding the MVC 2 NuGet package to my project inside Visual Studio 2013]

With these changes, I could build my MVC 2 project inside Visual Studio 2013 and run on my local box against my SQL Azure instance. Now I just need to get it up on Azure.

Moving to Azure

Publishing my MVC 2 site to Azure was matter of right-clicking on my project and choosing the Publish option:

[image: Publishing a web site to Azure using the Solution Explorer's Publish option inside Visual Studio 2013]

Selecting the Windows Azure Web Sites as the target and filling in the appropriate credentials was all it took to get my site running on Azure. I did some battle with the "Error to use a section registered as allowDefinition='MachineToApplication' beyond application level" bug in Visual Studio, but the only real issue I had was that Azure seemed to need the "Precompile during publishing" option set or it wasn't able to run my MVC 2 views when I surfed to them:

[image: Setting the "Precompile during publishing" option for Azure to run my MVC 2 views]

With that setting in place, my Azure site just ran at the Azure URL I had requested: http://sellsbrothers.azurewebsites.net.

Where are we?

I’m a fan of the direction of ASP.NET v.Next. The order of magnitude reduction in working set, the open source development and the use of NuGet to designate pieces of the framework that you want are all great things. My objection is that I don’t want to be forced to move forward to new versions of a framework if I don’t need the features. If I am forced, then that’s just churn in working code that’s bound to introduce bugs.

Tune in next time and we’ll discuss the fun I had configuring the DNS settings to make Azure the destination for sellsbrothers.com and to add SSL to enable secure login for posting articles via AtomPub and Windows Live Writer.





Moving My Site to Azure: The Database

In a world where the cloud is no longer the wave of the future but the reality of the present, it seems pretty clear that it's time to move sellsbrothers.com from my free ISP hosting (thanks, securewebs.com!) to the cloud, specifically Microsoft's Azure. Of course, I've had an Azure account since its inception, but there has been lots of work to streamline the Azure development process in the last two years, so now seemed like the ideal time to jump in and see how blue the waters really are.

As with any modern web property, I've got three tiers: presentation, service and database. Since the presentation tier uses server-side generated UI and its implementation is bundled together with the service tier, there are two big pieces to move – the ASP.NET site implementation and the SQL Server database instance. I decided to move the database first with the idea that once I got it hosted on Azure, I could simply flip the connection string to point the existing site at the new instance while I did the work to move the site separately.

Deploy Database To Windows Azure SQL Database from SSMS

The database for my site does what you’d expect – it keeps track of the posts I make (like this one), the images that go along with each post, the comments that people make on each post, the writing I do and the talks I give (shown on the writing page), book errata, some details about the site’s navigation, etc. In SQL Server Management Studio (SSMS), it looks pretty much like you’d expect:

image

sellsbrothers.com loaded into SQL Server Management Studio


However, before I could move the data, I needed a SQL Azure instance to move it to, so I fired up the Azure portal and created one:

image

Creating a new SQL Azure database


In this case, I chose to create a new SQL Azure instance on a new server, which Azure will spin up for us in a minute or two (and hence the wonder and beauty that is the cloud). I chose the Quick Create option instead of the Import option because the Import option required me to provide a .bacpac file, which was something I wasn’t familiar with. After creating the database and the corresponding server, clicking on the new server name (di5fa5p2lg in this case) gave me the properties of that server, including the Manage URL:

image

SQL Azure database properties
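As an aside, the portal isn’t the only way to get here; the Azure PowerShell module of this vintage can script the server and database creation, too. A sketch, assuming the module is installed and a subscription is selected (the cmdlet names are from memory, and the login and database name are illustrative):

    # Create a new SQL Azure server; Azure assigns the server name (di5fa5p2lg in my case)
    $server = New-AzureSqlDatabaseServer -Location "West US" `
        -AdministratorLogin "csells" -AdministratorLoginPassword "<strong password>"

    # Create an empty database on that server to deploy into
    New-AzureSqlDatabase -ServerName $server.ServerName -DatabaseName "sellsbrothers"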


If you click on the Manage URL, you get a web interface for interacting with your SQL Azure server, but more importantly for this exercise, its FQDN is what I needed to plug into SSMS to connect to that server. I’ll need that in a minute; in the meantime, I’d discovered what looked like the killer feature for my needs in the 2014 edition of SSMS:

image

Deploy Database to Windows Azure Database in SSMS 2014


By right-clicking on the database on my ISP in SSMS and choosing Tasks, I had the Deploy Database To Windows Azure SQL Database option. I was so happy to choose this option and see the Deployment Settings screen of the Deploy Database dialog:

image

SSMS Deploy Database dialog


Notice that the Server connection is filled in with the name of my new SQL Server instance on Azure. It started out blank; I filled it in by pushing the Connect button:

image

SSMS Connect to Server dialog


The Server name field of the Connect to Server dialog takes the FQDN we pulled from the Manage URL field of the Azure database server properties screen earlier, and the credentials are the same ones I set when I created the database. However, filling in this dialog for the first time gave me some trouble:

image

SQL Azure: Cannot open server ‘foo’ requested by the login


SQL Azure is doing the right thing here, keeping your databases secure by denying access to any machine that isn’t itself managed by Azure. To enable access from your client, look for the “Set up Windows Azure firewall rules for this IP address” option on the SQL database properties page in your Azure portal. You’ll end up with a server firewall rule that looks like the following (and that you may want to remove when you’re done with it):

image

SQL Azure server firewall rules
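If you’d rather script the rule than click through the portal, SQL Azure also exposes the firewall as stored procedures on the server’s master database. A sketch, with an illustrative rule name and address (substitute your client’s public IP):

    -- Run against the master database of your SQL Azure server
    EXEC sp_set_firewall_rule
        @name = N'HomeOffice',
        @start_ip_address = '203.0.113.42',
        @end_ip_address = '203.0.113.42';

    -- And when you're done with it:
    -- EXEC sp_delete_firewall_rule @name = N'HomeOffice';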


Once the firewall was configured, I filled in the connection properties and started the database deployment from my ISP to Azure – which is when my hopes and dreams were crushed:

image

SSMS Deploy Database: Operation Failed


Clicking on any of the Error links reported the same thing:

image

Error validating element dt_checkoutobject: Deprecated feature ‘String literals as column aliases’ is not supported by SQL Azure


At this point, all I could think was “what the heck is dt_checkoutobject?” (it’s something Microsoft added to my database), “what does it mean to use string literals as column aliases?” (it’s a deprecated feature that SQL Azure doesn’t support) and “why would Microsoft deprecate a feature that they used themselves in a stored proc they snuck into my database?!” Unfortunately, we’ll never know the answer to that last question. However, my righteous indignation went away as I dug into my schema and found several more SQL Azure-unsupported features that I had put in myself – primarily primary keys without clustered indexes, which SQL Azure requires so it can keep replicas of your database in the cloud. Even worse, I found one table listing errata for my books that didn’t have a primary key at all, and because nothing was enforcing data integrity, all of its data was in there twice (I can’t blame THAT on Microsoft : ).
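To make the failures concrete, here’s roughly what the offending constructs and their fixes look like; the table and column names are illustrative, not my actual schema:

    -- Deprecated: a string literal as a column alias, which SQL Azure rejects
    SELECT 'Title' = Subject FROM dbo.Post;
    -- The supported spelling
    SELECT Subject AS Title FROM dbo.Post;

    -- SQL Azure also requires a clustered index on every table;
    -- a clustered primary key does the job
    ALTER TABLE dbo.Errata ADD ErrataId INT IDENTITY(1,1) NOT NULL;
    ALTER TABLE dbo.Errata ADD CONSTRAINT PK_Errata PRIMARY KEY CLUSTERED (ErrataId);

    -- And ROW_NUMBER makes quick work of the doubled-up errata rows
    WITH Dupes AS (
        SELECT *, ROW_NUMBER() OVER (
            PARTITION BY BookId, PageNumber, Description
            ORDER BY (SELECT 0)) AS RowNum
        FROM dbo.Errata)
    DELETE FROM Dupes WHERE RowNum > 1;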

And just in case you think you can get around these requirements and sneak your database into SQL Azure without the updates, manually importing your data using a .bacpac file is even harder: you have to make the changes to your database before you can create the .bacpac file anyway, and then you have to upload the file to Azure’s blob storage, which requires a whole other tool that Microsoft doesn’t even provide.
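For the curious, creating the .bacpac itself looks something like this from the command line (SqlPackage.exe ships with SSDT; the flags are from memory and the database name is illustrative), and even then you still need a separate tool to push the file into blob storage:

    REM Export a local database to a .bacpac, validating SQL Azure rules as it goes
    SqlPackage.exe /Action:Export ^
        /SourceServerName:localhost /SourceDatabaseName:sellsbrothers ^
        /TargetFile:sellsbrothers.bacpac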

Making your Database SQL Azure-compatible using Visual Studio

Making my SQL database compatible with SQL Azure required changing its schema. Since I didn’t want to change the schema of a running database on my ISP, I copied the database from my ISP onto my local machine and made my schema changes there. Getting to SQL Azure compatibility, however, required knowing exactly which SQL constructs SQL Azure does and doesn’t support. Microsoft provides overview guidance on the limitations of SQL Azure, but that’s not the same as an automated tool that can check every line of your SQL. Luckily, Microsoft provides just such a tool built into Visual Studio.

Bringing Microsoft’s SQL compiler to bear on SQL Azure compatibility means using VS to create a SQL Server Database Project and pointing it at the database you’d like to import from (in my case, the copy on my local machine). After you’ve imported your database’s schema, doing a build will check your SQL for you. To get VS to check your SQL for Azure compatibility specifically, bring up the project settings and choose Windows Azure SQL Database as the Target platform:

image

Visual Studio 2013: Setting Database Project Target Platform


With this setting in place, compiling your project will tell you what’s wrong with your SQL from an Azure point of view. Once you’ve fixed your schema (which may require fixing your data, too), you can generate a change script that updates your database in place to make it Azure-compatible. For more details, check out Bill Gibson’s excellent article Migrating a Database to SQL Azure using SSDT.

The Connection String

Once the database has been deployed and tested (SSMS and the Manage URL are both good ways to verify that your data is hosted the way you think it should be), it’s merely a matter of changing the connection string to point to the SQL Azure instance. You can compose the connection string yourself, or you can choose the “View connection strings for ADO.NET, ODBC, PHP and JDBC” option from your database properties page on Azure:

image

SQL Azure: Connection Strings


You’ll notice that while I blocked out some of the details of the connection string in my paranoia, Azure itself is too paranoid to show the password; don’t forget to insert it yourself, and put it in a .config file that doesn’t make it into source control.
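For reference, the ADO.NET string follows the standard SQL Azure shape – something like the following for my server, wrapped here for readability (the database name is illustrative and the password is yours to fill in):

    Server=tcp:di5fa5p2lg.database.windows.net,1433;Database=sellsbrothers;
    User ID=<user>@di5fa5p2lg;Password=<password>;
    Trusted_Connection=False;Encrypt=True;Connection Timeout=30;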

Where are we?

In porting sellsbrothers.com from my ISP to Azure, I started with the database. The tools are there (nice tools, in fact), but you’ll need to make sure your database schema is SQL Azure-compatible, which can take some doing. In the next installment, I’ll talk about how I moved the implementation of the site itself, which was not trivial, since it’s implemented in ASP.NET MVC 2, which Microsoft has long since abandoned.

If you’d like to check out the final implementation in advance of my next post, help yourself to the sellsbrothers.com project on GitHub. Enjoy.





Bringing The Popular Tech Meetups to Portland

pdx-tech-meetup-logo

I’ve been watching the Portland startup scene for years. However, in the last 12 months, it’s really started to take off, so when I had an opportunity to mentor at the recent Portland Startup Weekend, I was all over it. I got to do and see all kinds of wonderful things at PDXSW, but one of the best was meeting Thubten Comerford and Tyler Phillipi. Between the three of us, we’re bringing the very popular Tech Meetup conference format to Portland.

The idea of a Tech Meetup is to focus on pure tech. In fact, at the largest of the Tech Meetups, in New York (33,000 members strong!), they have a rule that it’s actually rude to ask about the business model. Tech Meetups are tech for tech’s sake. Whether you’re at a company big or small or just playing around, cool tech always has a place at the Portland Tech Meetup.

The format is simple, and if you’re familiar with the way they do things in Boulder or Seattle, you’re already familiar with it. Starting on January 20th, 2014, every 3rd Monday at 6pm, we’ll open the doors for some networking time, providing free food and drink to grease the skids. At 7pm, we’ll start the tech presentation portion of the evening, which should be at least five tiny talks from tech presenters of all kinds. After the talks, we’ll wrap up around 8pm and then head to the local watering hole for the debrief.

If this sounds interesting to you, sign up right now!

If you’d like to present, drop me a line!

If you’d like to sponsor, let Thubten know.

We’re very excited about bringing this successful event to Portland, so don’t be shy about jumping in; the water is fine…




