# Planet LILUG

## May 19, 2013

### Justin Dearing

#### Creating a minimally viable Centos instance for SSH X11 Forwarding

I recently needed to set up a CentOS 6.4 VM for Java development. I wanted to be able to run Eclipse STS on said VM and display the X11 windows remotely on my Windows 7 desktop via XMing. I saw no reason for the CentOS VM to have a local X11 server. I’m quite comfortable with the Linux command line, so I decided to share briefly how to go from a CentOS minimal install to something actually useful for getting work done.

• /usr/bin/man The minimal install installs man pages, but not the man command. This is an odd choice. yum install man will fix that.
• vim There is a bare bones install of vim included by default that is only accessible via vi. If you want a more robust version of vim, yum install vim.
• X11 forwarding You need the xauth package and fonts. yum install xauth will allow X11 forwarding to work. yum groupinstall fonts will install a set of fonts.
• A terminal for absolute minimal viability yum install xterm will give you a terminal. I prefer terminator, which is available through rpmforge.
• RpmForge (now repoforge) Centos is based on Red Hat Enterprise Linux. Therefore it focuses on being a good production server, not a developer environment. You will probably need rpmforge to get some of the packages you want. The directions for adding Rpmforge to your yum repositories are here.
• terminator This is my terminal emulator of choice. Once you have added rpmforge, yum install terminator.
• gcc, glibc, etc Honestly, you can usually live without these if you stick to precompiled rpms, and you’re not using gcc for development. If you need to build a kernel module, yum install kernel-devel gcc make should get you what you need.
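As a recap, the package installs above boil down to a handful of commands (a sketch; run as root, with the rpmforge repository added as described above):

```shell
yum install man vim xauth xterm
yum groupinstall fonts
# from rpmforge:
yum install terminator
# only if you need to build kernel modules:
yum install kernel-devel gcc make
```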

From here, you can install the stuff you need for your development environment for your language, framework, and scm of choice.

## May 11, 2013

### Justin Dearing

#### When your PowerShell cmdlet doesn’t return anything, use -PassThru

The other day I was mounting an ISO in Windows 8 via the Mount-DiskImage command. Since I was mounting the disk image in a script, I needed to know the drive letter it was mounted to so the script could access the files contained within. However, Mount-DiskImage was not returning anything. I didn’t want to go through the hack of listing drives before and after I mounted the disk image, or explicitly assigning the drive letter. Both would leave me open to race conditions if another drive was mounted by another process while my script ran. I was at a loss for what to do.

Then, I remembered the -PassThru parameter, which I am quite fond of using with Add-Type. You see, certain cmdlets, like Mount-DiskImage and Add-Type, don’t return pipeline output by default. For Add-Type, this makes sense. You rarely want to see a list of the types you just added, unless you’re exploring the classes in a DLL from the command line. However, for Mount-DiskImage, defaulting to no output was a questionable decision IMHO.

Now in the case of Mount-DiskImage, -PassThru doesn’t return the drive letter. However, it does return an object that you can pipe to Get-Volume which does return an object with a DriveLetter property. To figure that out, I had to ask on stackoverflow.
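Put together, the resulting pipeline looks something like this (a sketch; the ISO path is hypothetical):

```powershell
# -PassThru hands back the DiskImage object the cmdlet would otherwise
# swallow; piping it to Get-Volume yields the DriveLetter property.
$image  = Mount-DiskImage -ImagePath 'C:\isos\example.iso' -PassThru
$letter = ($image | Get-Volume).DriveLetter
```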

tl;dr: If your PowerShell cmdlet doesn’t return any output, try -PassThru. If you need the drive letter of a disk image mounted with Mount-DiskImage, pipe the output through Get-Volume.

For a more in depth treatise of -PassThru, check out this script guy article by Ed Wilson(blog|twitter).

#### Getting the Drive Letter of a disk image mounted with WinCdEmu

In my last post, I talked about mounting disk images in Windows 8. Both Windows 8 and 2012 include native support for mounting ISO images as drives. However, in prior versions of Windows you needed a third party tool to do this. Since I have a preference for open source, my tool of choice before Windows 8 was WinCdEmu. Today, I decided to see if it was possible to determine the drive letter of an ISO mounted by WinCdEmu with PowerShell.

A quick search of the internet revealed that WinCdEmu contained a 32 bit command line tool called batchmnt.exe, and a 64 bit counterpart called batchmnt64.exe. These tools were meant for command line automation. While I knew there would be no .NET libraries in WinCdEmu, I did have hope there would be a COM object I could use with New-Object. Unfortunately, all the COM objects were for Windows Explorer integration and popped up GUIs, so they were inappropriate for automation.

Next I needed to figure out how to use batchmnt. For this I used batchmnt64 /?.

C:\Users\Justin>"C:\Program Files (x86)\WinCDEmu\batchmnt64.exe" /?
BATCHMNT.EXE - WinCDEmu batch mounter.
Usage:
batchmnt <image file> [<drive letter>] [/wait] - mount image file
batchmnt /unmount <image file>         - unmount image file
batchmnt /unmount <drive letter>:      - unmount image file
batchmnt /check   <image file>         - return drive letter as ERORLEVEL
batchmnt /unmountall                   - unmount all images
batchmnt /list                         - list mounted

C:\Users\Justin>

Mounting and unmounting are trivial. The /list switch produces some output that I could parse into a PSObject if I so desired. However, what I really found interesting was batchmnt /check. The process returned the drive letter as the ERRORLEVEL, i.e., the ExitCode of the batchmnt process. If you have ever programmed in a C-like language, you know your main function can return an integer. Traditionally 0 means success and a non-zero value means failure. However, in this case 0 means the image is not mounted, and a non-zero value is the ASCII code of the drive letter. To get that code in PowerShell is simple:

$proc = Start-Process -Wait "C:\Program Files (x86)\WinCDEmu\batchmnt64.exe" -ArgumentList '/check', '"C:\Users\Justin\SQL Server Media\2008R2\en_sql_server_2008_r2_developer_x86_x64_ia64_dvd_522665.iso"' -PassThru; [char]$proc.ExitCode

The Start-Process cmdlet normally returns immediately without output. The -PassThru switch makes it return information about the process it created, and -Wait makes the cmdlet wait for the process to exit, so that information includes the exit code. Finally, to turn that ASCII code into the drive letter, we cast with [char].
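The decoding convention is easy to model; here is a small Python sketch for illustration (the function name is mine):

```python
def drive_letter_from_exit_code(code):
    """batchmnt /check convention: exit code 0 means the image is not
    mounted; any other value is the ASCII code of the drive letter."""
    if code == 0:
        return None
    return chr(code)

print(drive_letter_from_exit_code(68))  # prints: D
```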

## May 05, 2013

### Josef "Jeff" Sipek

#### Instrument Flying

I was paging through a smart collection in Lightroom, when I came across a batch of photos from early December that I did not share yet. (A smart collection is a filter that will only show you photos satisfying a predicate.)

On December 2nd, one of the people I work with (the same person that told me exactly how easy it is to sign up for lessons) told me that he was going up to do a couple of practice instrument approaches to Jackson (KJAX) in the club’s Cessna 182. He then asked if I wanted to go along. I said yes. It was a warm, overcast day…you know, the kind when the weather seems to sap all the motivation out of you. I was going to sit in the back (the other front seat was occupied by another person I work with — also a pilot) and play with my camera. Below are some of the better shots; there are more in the gallery.

US-127 and W Berry Rd:

The pilot:

The co-pilot:

On the way back to Ann Arbor, we climbed to five thousand feet, which took us out of the clouds. Since I was sitting in the back, I was able to swivel around and enjoy the sunset on a completely overcast day. The experience totally made my day. After I get my private pilot certificate, I am definitely going to consider getting instrument rated.

The clouds were very fluffy.

## May 03, 2013

### Justin Dearing

#### Setting the Visual Studio TFS diff and merge tools with PowerShell

I recently wrote this script to let me quickly change the diff and merge tools TFS uses from PowerShell. I plan to make it a module and add it to the StudioShell Contrib package by Jim Christopher (blog|twitter). For now, I share it as a gist and place it on this blog.

The script supports Visual Studio 2008-2012 and the following diff tools:

Enjoy!

## April 28, 2013

### Josef "Jeff" Sipek

#### Change Ringing - The Changes

We have seen what a bell tower set up for change ringing looks like; we have looked at the mechanics of ringing a single bell and what it sounds like if you ring the bells in what is called rounds (all bells ring one after each other in order of pitch, starting with the treble and ending with the tenor).

Ringing rounds is good practice, but ringing would be really boring if that was all there was. Someone at some point decided that it’d be fun for one of the ringers to be a conductor, and direct the other ringers to do the most obvious thing — swap around. So, for example, suppose we have 6 bells, the treble is the first, and the tenor is the last. First, we get rounds by ringing all of them in numerical order:

123456

Then, the conductor makes a call telling two bells to change around. For example, say that the conductor says: 5 to 3. This tells the person ringing bell number 5 that the next hand stroke (I completely skipped over this part in the previous post, but bell strikes come in pairs: hand stroke, and back stroke) he should follow the bell number 3. In other words, the new order will be:

123546

You can see that in addition to the 5 changing place, the 4 had to move too! Now, it is following the 5.

Until the next call, the bells go in this order. Then the conductor may say something like: 3 to 1, or 3 to treble. Just as before, 2 bells move. This time, it is the 2 and the 3, yielding:

132546

Let’s have another call…5 to 3. Now, we have:

135246

This pattern (all odd bells in increasing order, followed by all even bells in increasing order) is called Queens. There are many such patterns.

Ringing traditionally starts in rounds, and ends in rounds. So, let’s have a few calls and return the bells to rounds.

• 3 to the lead (this means that 3 will be the first bell): 315246
• 4 to 5: 315426
• 4 to the treble: 314526
• 2 to 4: 314256
• treble lead: 134256
• 2 to 3: 132456
• rounds next: 123456

There we have it. We’re back in rounds. There was nothing special about the order of these changes. Well, there is one rule: the bells that are changing places must be adjacent. So, for example, if we start in rounds, we can’t do 4 to the treble. Why is that? These bells are heavy, and especially the heavier ones (>10  cwt) will not move that far easily. Remember, this is bell ringing, not wrestling.
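For the programmatically inclined, the calls can be modeled as adjacent swaps. This Python sketch (my own modeling, not standard ringing software) reproduces the sequences above:

```python
def call(order, mover, target=None):
    """Apply a conductor's call: bell `mover` will now follow bell
    `target` (or lead, if target is None).  Only adjacent bells may
    change places, mirroring the rule described above."""
    bells = list(order)
    old = bells.index(mover)
    bells.pop(old)
    new = 0 if target is None else bells.index(target) + 1
    if abs(old - new) != 1:
        raise ValueError("bells changing places must be adjacent")
    bells.insert(new, mover)
    return "".join(bells)

# The calls from the post, starting from rounds:
print(call("123456", "5", "3"))  # 5 to 3        -> 123546
print(call("123546", "3", "1"))  # 3 to treble   -> 132546
print(call("132546", "5", "3"))  # 5 to 3        -> 135246 (Queens)
print(call("135246", "3"))       # 3 to the lead -> 315246
```

Calling `call("123456", "4", "1")` (4 to the treble from rounds) raises an error, since those bells are not adjacent.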

## April 22, 2013

### Josef "Jeff" Sipek

#### Plotting with ggmap

Recently, I came across the ggmap package for R. It supposedly makes for some very easy plotting on top of Google Maps or OpenStreetMap. I grabbed a GPS recording I had laying around, and gave it a try.

You may recall my previous attempts at plotting GPS data. This time, the data file I was using was recorded with a USB GPS dongle. The data is much nicer than what a cheap smartphone GPS could produce.

time   ept      lat       lon   alt   epx    epy mode
1 1357826674 0.005 42.22712 -83.75227 297.7 9.436 12.755    3
2 1357826675 0.005 42.22712 -83.75227 297.9 9.436 12.755    3
3 1357826676 0.005 42.22712 -83.75227 298.1 9.436 12.755    3
4 1357826677 0.005 42.22712 -83.75227 298.4 9.436 12.755    3
5 1357826678 0.005 42.22712 -83.75227 298.6 9.436 12.755    3
6 1357826679 0.005 42.22712 -83.75227 298.8 9.436 12.755    3

For this test, I used only the latitude, longitude, and altitude columns. Since the altitude is in meters, I multiplied it by 3.2 to get a rough altitude in feet. Since the data file is long and goes all over, I truncated it to only the last 33 minutes.

The magical function is the get_map function. You feed it a location, a zoom level, and the type of map and it returns the image. Once you have the map data, you can use it with the ggmap function to make a plot. ggmap behaves a lot like ggplot2’s ggplot function and so I felt right at home.

Since the data I am trying to plot is a sequence of latitude and longitude observations, I’m going to use the geom_path function to plot them. Using geom_line would not produce a path since it reorders the data points. Second, I’m plotting the altitude as the color.

Here are the resulting images:

Here’s the entire script to get the plots:

#!/usr/bin/env Rscript

library(ggmap)

# x is the data frame of GPS observations (lon, lat, alt columns);
# loading it from the recording is omitted here

# get the bounding box... left, bottom, right, top
loc <- c(min(x$lon), min(x$lat), max(x$lon), max(x$lat))

# one preview and one full-size plot per map type
# (the original list of map types is not shown; these are common choices)
for (type in c("roadmap", "terrain", "satellite", "hybrid")) {
    print(type)
    map <- get_map(location=loc, zoom=13, maptype=type)
    p <- ggmap(map) + geom_path(aes(x=lon, y=lat, color=alt*3.2), data=x)

    jpeg(paste(type, "-preview.jpg", sep=""), width=600, height=600)
    print(p)
    dev.off()

    jpeg(paste(type, ".jpg", sep=""), width=1024, height=1024)
    print(p)
    dev.off()
}

## April 20, 2013

### Josef "Jeff" Sipek

#### Matthaei Botanical Gardens

Back in early February, Holly and I went to the Matthaei Botanical Gardens. I took my camera with me. After over two months of doing nothing with the photos, I finally managed to post-process some of them. I have no idea what the various plants are called — I probably should have made note of the signs next to each plant. (photo gallery)

This one didn’t turn out as nicely as I hoped. Specifically, it is a little blurry. Maybe I’ll go back at some point to retake the photo.

This one is just cool.

I think this is some kind of Aloe.

## April 19, 2013

### Josef "Jeff" Sipek

#### IPS: The Manifest

In the past, I have mentioned that IPS is great. I think it is about time I gave you more information about it. This time, I’ll talk about the manifest and some core IPS ideals.

IPS, Image Packaging System, has some really neat ideas. Each package contains a manifest. The manifest is a file which lists actions. Some very common actions are “install a file at path X,” “create a symlink from X to Y,” as well as “create user account X.” The great thing about this, is that the manifest completely describes what needs to be done to the system to install a package. Uninstalling a package simply undoes the actions — delete files, symlinks, users. (This is where the “image” in IPS comes from — you can assemble the system image from the manifests.)

For example, here is the (slightly hand edited) manifest for OpenIndiana’s rsync package:

set name=pkg.fmri value=pkg://openindiana.org/network/rsync@3.0.9,5.11-0.151.1.7:20121003T221151Z
set name=org.opensolaris.consolidation value=sfw
set name=variant.opensolaris.zone value=global value=nonglobal
set name=description value="rsync - faster, flexible replacement for rcp"
set name=variant.arch value=i386
set name=pkg.summary value="rsync - faster, flexible replacement for rcp"
set name=pkg.description value="rsync - A utility that provides fast incremental file transfer and copy."
set name=info.classification value="org.opensolaris.category.2008:Applications/System Utilities"
dir group=sys mode=0755 owner=root path=usr
dir group=bin mode=0755 owner=root path=usr/bin
dir group=sys mode=0755 owner=root path=usr/share
dir group=bin mode=0755 owner=root path=usr/share/man
dir group=bin mode=0755 owner=root path=usr/share/man/man1
dir group=bin mode=0755 owner=root path=usr/share/man/man5
chash=3b72b91c9315427c1994ebc5287dbe451c0731dc
elfarch=i386 elfbits=32
elfhash=1d3feb5e8532868b099e8ec373dbe0bea4f218f1
group=bin mode=0555 owner=root path=usr/bin/rsync
pkg.csize=191690 pkg.size=395556
file 7bc01c64331c5937d2d552fd93268580d5dd7c66
chash=328e86655be05511b2612c7b5504091756ef7e61
group=bin mode=0444 owner=root
path=usr/share/man/man1/rsync.1 pkg.csize=50628
pkg.size=165934
file 006fa773e1be3fecf7bbfb6c708ba25ddcb0005e
chash=9e403b4965ec233c5e734e6fcf829a034d22aba9
group=bin mode=0444 owner=root
path=usr/share/man/man5/rsyncd.conf.5
pkg.csize=12610 pkg.size=37410
depend fmri=consolidation/sfw/sfw-incorporation type=require
depend fmri=system/library@0.5.11-0.151.1.7 type=require

The manifest is very easily readable. It is obvious that there are several sets of actions:

• set: specifies the FMRI, description, and architecture among others
• directories: lists all the directories that need to be created/deleted during installation/removal
• license: specifies the file with the text of the license for the package
• files: in general, most actions are file actions — each installs a file
• dependencies: lastly, rsync depends on system/library and sfw-incorporation

The above example is missing symlinks, hardlinks, user accounts, services, and device driver related actions.
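To make the action format concrete, here is a minimal Python sketch (my own illustration, not part of IPS) that splits an unquoted action line into its verb, positional arguments, and key=value attributes:

```python
def parse_action(line):
    """Split one unquoted IPS manifest action line into its verb,
    positional arguments (such as a file hash), and key=value attributes."""
    verb, *tokens = line.split()
    positional, attrs = [], {}
    for tok in tokens:
        if "=" in tok:
            key, _, value = tok.partition("=")
            attrs[key] = value
        else:
            positional.append(tok)
    return verb, positional, attrs

print(parse_action("dir group=sys mode=0755 owner=root path=usr"))
```

Quoted values containing spaces (like the pkg.summary line above) would need a real tokenizer; this sketch deliberately ignores them.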

Many package management systems have the ability to execute arbitrary scripts after installation or prior to removal. IPS does not allow this since it would violate the idea that the manifest completely describes the package. This means (in theory), that one can tell IPS to install the base packages into a directory somewhere, and at the end one has a working system.

It all sounds good, doesn’t it? As always, the devil is in the details.

First of all, sometimes there’s just no clean way to perform all package setup at install time. One just needs a script to run to take care of the post-install configuration. Since IPS doesn’t support this, package developers often create a transient SMF manifest and let SMF run the script after the installation completes. This is just ugly, but not the end of the world.

#### Requests?

I’m going to try something new. Instead of posting a random thought every so often, I’m going to take requests. What do you want me to talk about next?

#### Math test

I decided to finally implement some math support. Here’s a test post.

I hope equation support will come in handy.

## February 13, 2013

### Josef "Jeff" Sipek

#### FAST 2013

Since FAST starts today, yesterday was dedicated to flying out to San Jose.

Once at KDTW, I spent most of my wait there watching planes at the gates as well as watching more planes take off on 22L. Somehow, it was fascinating to watch them land on 22L and see 22R in the background — the same 22R that I got to do touch and go’s on a couple of weeks ago. I think not having to aviate first let me enjoy the sights — planes large and small barrelling down the runway and then *poof* they gently lift off the runway. At about 500 feet the gear retracts. It’s magic!

At one point, I saw the plane at the adjacent gate being prepared for its next flight. I both enjoyed seeing and sympathized with one of the crew (I assume the first officer since I suspect the captain wanted to stay warm) walking around the plane visually inspecting it. I know how annoying it is to be outside when it is cold to make sure the plane is safe to fly, yet I find it comforting that the same rules apply not only to Cessna 172s but also to Airbus A320s.

The first leg of the trip took me to KSLC. I brought my copy of the FAR/AIM with me. I read a bunch. I looked out the window a bunch. After we got past Lake Michigan, the sky cleared up allowing me to watch the ground below instead of the layer of overcast. I was very surprised to discover that the snow covered landscape makes it very easy to spot airports. Well, it is easy to spot paved runways that have been plowed.

The approach to KSLC was pretty cool. I never thought about the landscape in Utah before, but it turns out that Salt Lake City is surrounded by some serious mountains. Now, throw in winter weather with overcast and you’ll end up with a sea of white except for a few places where the mountains are peeking through.

Learning to fly in southeastern Michigan doesn’t make you think about mountains — there just aren’t any. Seeing the mountains peeking through the clouds was a scary reminder that there are more things in the sky than just other airplanes and some towers. If one were flying VFR above the clouds (which is a bad idea), where would be a safe place to descend? Obviously not where the mountains peek through, but any other place might be just as bad. The best looking place could have a mountain or a ridge a few hundred feet below the cloud tops. Granted, sectional charts would depict all the mountains but it is a dangerous game to play.

I knew we would end up descending through the overcast and so I played a little game I expected to lose. Once we were in the clouds, I tried to keep track of our attitude by just sensing the forces. I knew I would fail, but I thought it would be interesting to try my best. We spent maybe 90 to 120 seconds in the clouds. At the end, I definitely felt like we were in a right bank — spatial disorientation. I knew that we probably weren’t, but without visual information to fix up my perception there was no way for me to know.

We landed. I watched all the airport signs and markings, following our progress on an airport diagram. Once people started getting off the plane, I decided to ask to see the airworthiness certificate. The first officer (I think) found all the paperwork in the cockpit and showed me. It was really cool to see the same form I see every time I fly the 172 but filled out for an A320. (Theirs was laminated!) We chatted for a little bit about what I fly, and how it’s a good plane. It was fun.

It was time to get to my connecting flight. Nothing interesting happened. I spent about half the flight watching the outside and half reading my book.

After arriving at KSJC, I got up from my seat in the small but comfy plane (CRJ200). I grabbed my backpack from the overhead bin with one hand since the other hand not only had my hoodie draped over but was holding the FAR/AIM. I started filing out. All that was left to do was give the thank-you-for-landing-safely-and-not-killing-me nod to the crew as I exited the plane. The captain or FO happened to be standing in the cockpit door saying good bye to passengers. I nodded as planned. He responded: “good book.” I smiled.

## January 20, 2013

### Nate Berry

#### Installing Cyanogenmod on ASUS Transformer TF101

Back in December, 2011 when I first got my ASUS Transformer TF101 it came loaded with android 3.2 (Honeycomb). The OS was pretty slick, though Honeycomb was only the first tablet-ready android so there wasn’t a lot of polish to it. Over time I accepted the various ASUS-vetted updates to the tablet and eventually got [...]

## January 19, 2013

### Josef "Jeff" Sipek

#### Useless reinterpret_cast in C++

A few months ago (for whatever reason, I didn’t publish this post earlier), I happened to stumble on some C++ code that I had to modify. While trying to make things work, I happened to get code that essentially was:

uintptr_t x = ...;
uintptr_t y = reinterpret_cast<uintptr_t>(x);

Yes, the cast is useless. The actual code I had was much more complicated and it wasn’t immediately obvious that ‘x’ was already a uintptr_t. Thinking about it now, I would expect GCC to give a warning about a useless cast. What I did not expect was what I got:

foo.cpp:189:3: error: invalid cast from type "uintptr_t {aka long unsigned int}"
to type "uintptr_t {aka long unsigned int}"

Huh? To me it seems a bit silly that the compiler does not know how to convert from one type to the same type. (For what it’s worth, this is GCC 4.6.2.)

Can anyone who knows more about GCC and/or C++ shed some light on this?

#### Serial Console

Over the past couple of days, I’ve been testing my changes to the crashdump core in Illumos. (Here’s why.) I do most of my development on my laptop — either directly, or I use it to ssh into a dev box. For Illumos development, I use the ssh approach. Often, I end up using my ancient desktop (pre-HyperThreading era 2GHz Pentium 4) as a test machine. It gets pretty annoying to have a physical keyboard and monitor to deal with when the system crashes. The obvious solution is to use a serial console. Sadly, all the “Solaris serial console howtos” leave a lot to be desired. As a result, I am going to document the steps here. I’m connecting from Solaris to Solaris. If you use Linux on one of the boxes, you will have to do it a little differently.

#### Test Box

First, let’s change the console speed from the default 9600 to a more reasonable 115200. In /etc/ttydefs change the console line to:

console:115200 hupcl opost onlcr:115200::console

Second, we need to tell the kernel to use the serial port as a console. Here, I’m going to assume that you are using the first serial port (i.e., ttya). So, open up your Grub config (/rpool/boot/grub/menu.lst assuming your root pool is rpool) and find the currently active entry.

You’ll see something like this:

title openindiana-8
findroot (pool_rpool,0,a)
bootfs rpool/ROOT/openindiana-8
splashimage /boot/splashimage.xpm
foreground FF0000
background A8A8A8
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive

We need to add two options: one to tell the kernel to use the serial port as a console, and one to tell it the serial config (rate, parity, etc.). You’ll want to change the kernel$ line to:

kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS,console=ttya,ttya-mode="115200,8,n,1,-" -k

Note that we appended the options with commas to the existing -B. If you do not already have a -B, just add it and the two new options. The -k will make the kernel drop into the debugger when bad things happen. You can omit it if you just want a serial console without the debugger getting loaded.

There’s one last thing left to do. Let’s tell grub to use the same serial port and not use a splash image. This can be done by adding these lines to the top of your menu.lst:

serial --unit=0 --speed=115200
terminal serial

and removing (commenting out) the splashimage line.

So, what happens if you make all these changes and then beadm creates a new BE? The right thing! beadm will copy over all the kernel options so your new BE will just work.

#### Dev Box

I use OpenIndiana on my dev box. I could have used minicom, but I find minicom to be a huge pain unless you have a modem you want to talk to. I’m told that screen can talk to serial ports as well. I decided to keep things super-simple and configured tip. First, one edits /etc/remote. I just changed the definition for hardwire to point to the first serial port (/dev/term/a) and use the right speed (115200):

hardwire:\
	:dv=/dev/term/a:br#115200:el=^C^S^Q^U^D:ie=%$:oe=^D:

Then, I can just run a simple command to get the other system:

$ tip hardwire

## January 11, 2013

### Nate Berry

#### Youtube TV control from my android tablet

I’ve got a small and very underpowered System76 meerkat hooked up to my LCD TV. The meerkat is the first “net top” they released in 2009, which you can think of as the desktop equivalent of a netbook (remember those?), and shipped with an Atom processor, one GB of RAM and an 80GB hard drive. [...]

## January 06, 2013

### Justin Dearing

#### Announcing SevenZipCmdLine.MSBuild

This was a quick and dirty thing born out of necessity: I needed to make zip files of PoshRunner so I could make its chocolatey package. I made MSBuild tasks for creating 7zip and zip files out of the $(TargetDir) of an MSBuild project. There is a nuget package for it. Simply include it in your project via nuget and build it from the command line with the following command:

%windir%\microsoft.net\framework\v4.0.30319\msbuild __PROJECT_FOLDER__\__PROJECT_FILE__ /t:SevenZipBin,ZipBin

This will create project.zip and project.7z in __PROJECT_FOLDER__\bin\Target. To see how to override some of the defaults, look at this msbuild file in PoshRunner.

Source code is available via a github repo, and patches are welcome!

#### PoshRunner now on SourceForge and Chocolatey

I’ve been periodically hacking away at PoshRunner. I have lots of plans for it, including rewriting some of it in C++, allowing you to log output to MongoDB, and total world domination! However, today’s news is not as grand.

The first piece of news is I made a PoshRunner sourceforge project to distribute the binaries. To download the latest version, click here. Secondly, there is now a PoshRunner chocolatey package, so you can install it via chocolatey. Finally, there is not a lot of documentation on PoshRunner.exe, so here is the output of poshrunner -help.

Usage: poshrunner.exe [OPTION] [...]

Options:
--appdomainname=NAME                                     Name to give the AppDomain the PowerShell script executes in.
--config=CONFIGFILE                                      The name of the app.config file for the script. Default is scriptName.config
-f SCRIPT, --script=SCRIPT                               Name of the script to run.
-h, --help                                               Show help and exit
--log4netconfig=LOG4NETCONFIGFILE                        Override the default config file for log4net.
--log4netconfigtype=LOG4NETCONFIGTYPE                    The type of Log4Net configuration.
-v, --version                                            Show version info and exit

## January 04, 2013

#### Correctly Verifying an Email Address

Some services that accept email addresses want to ensure that these email addresses are valid.

There are multiple aspects to an email being valid:
1. The address is syntactically valid.
2. An SMTP server accepts mail for the address.
3. A human reads mail sent to the address.
4. The address belongs to the person submitting it.

How does one verify an email address? I'll start with the wrong solutions and build up the correct one.

### Possibility #0 - The Regular Expression

Discussions on a correct regular expression to parse email addresses are endless. They are almost always wrong. Even really basic pattern matching such as *@*.* is wrong: it will reject the valid email address n@ai.[5]

Even a fully correct regular expression does not tell you if the mailbox is valid or reachable.
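To see the failure concretely, here is a quick Python sketch; the pattern below is my stand-in for the *@*.* idea:

```python
import re

# A naive "something@something.something" pattern, in the spirit of *@*.*
naive = re.compile(r"^[^@]+@[^@]+\.[^@]+$")

# The valid address n@ai has no dot after the @, so the naive
# pattern rejects it while accepting the usual-looking address.
print(bool(naive.match("n@ai")))              # prints: False
print(bool(naive.match("user@example.com")))  # prints: True
```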

This scores 0/4 on the validity checking scale.

### Possibility #1 - The VRFY Command

The oldest mechanism for verifying an email address is the VRFY mechanism in RFC821 section 4.1.1:

VERIFY (VRFY) This command asks the receiver to confirm that the argument identifies a user. If it is a user name, the full name of the user (if known) and the fully specified mailbox are returned.

However, this isn't sufficient. Most SMTP servers disable this feature for security and anti-spam reasons. This feature could be used to enumerate every username on the server to perform more targeted password guessing attacks:

Both SMTP VRFY and EXPN provide means for a potential spammer to test whether the addresses on his list are valid (VRFY)... Therefore, the MTA SHOULD control who is allowed to issue these commands. This may be "on/off" or it may use access lists similar to those mentioned previously.

This feature wasn't guaranteed to be useful at the time the RFC was written:[1]

The VRFY and EXPN commands are not included in the minimum implementation (Section 4.5.1), and are not required to work across relays when they are implemented.

Finally, even if VRFY was fully implemented there is no guarantee that a human being reads the mail sent to that particular mailbox.

All of this makes VRFY useless as a validity checking mechanism, so it scores 1/4 on the validity checking scale.

### Possibility #2 - Sending a Probe Message

With this method you connect to a mail server and pretend to send a real mail message, but cut off before sending the message content. This is wrong for the following reasons:

A system administrator that disabled VRFY has a policy of not allowing the testing of email addresses. Therefore the ability to test an email address by sending a probe should be considered a bug, and probes must not be used.

The system might be set up to detect signs of a probe, such as cutting off early, and may rate limit or block the sender.

In addition, the SMTP server may be temporarily down or the mailbox temporarily unavailable, but this method provides no resilience against failure. This is especially true if this mechanism is attempting to provide real-time feedback to the user after submitting a form.

This scores 1/4 on the validity checking scale.

### Possibility #3 - Sending a Confirmation Mail

If one cares about whether a human is reading the mailbox, the simplest way to find out is to send a confirmation mail. In the email, include a link to a website (or set a special reply address) with some indication of what is being confirmed. For example, to confirm "user@example.com" is valid, the link might be http://example.com/verify?email=user@example.com or http://example.com/verify?account=12345[2].

This method is resilient against temporary failures and forwarders. Temporary failures could be retried like a normal SMTP conversation.

This way it is unlikely that a non-human will trigger the verification email[3]. While this approach solves some of the concerns, it suffers from a fatal flaw:

It isn't secure. It is usually trivial to guess the ID number, email account, or other identifier. An attacker could sign up with someone else's email address and then go to the verification page for that account. It might be tempting to use a random ID instead, but randomness implementations are usually not secure.

This scores 3/4 on the validity checking scale.

### Possibility #4 - Sending a Confirmation Mail + HMAC

The correct solution is to send a confirmation mail, but include a MAC of the identifier in the verification mechanism (reply or url) as well. A MAC is a construction used to authenticate a message by combining a secret key and the message contents. One family of constructions, HMAC, is a particularly good choice. This way the url might become http://example.com/verify?email=user@example.com&mac=74e6f7298a9c2d168935f58c001bad88[4].

Remember that HMAC is a specific construction, not a naive hash. It would be wise to use your framework's native function, such as PHP's hash_hmac. Failing to include a secret in the construction would make the MAC trivially defeatable by brute force.
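For illustration, here is a minimal sketch of the scheme in Python using the standard library's hmac and hashlib modules. The key name and the use of HMAC-SHA256 (rather than the HMAC-MD5 shown above) are my choices for the sketch, not part of the original post:

```python
import hmac
import hashlib

# Hypothetical server-side secret; in practice load it from configuration.
# It must never appear in the URL or be exposed to clients.
SECRET_KEY = b"change-me-to-a-real-secret"

def make_mac(email: str) -> str:
    """MAC over the identifier; only the holder of SECRET_KEY can produce it."""
    return hmac.new(SECRET_KEY, email.encode("utf-8"), hashlib.sha256).hexdigest()

def verify_mac(email: str, mac: str) -> bool:
    # compare_digest avoids leaking timing information during comparison.
    return hmac.compare_digest(make_mac(email), mac)

email = "user@example.com"
link = f"http://example.com/verify?email={email}&mac={make_mac(email)}"
```

An attacker who guesses another user's address or account number still cannot construct a valid mac parameter without the server's secret.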

This scores 4/4 on the validity checking scale.

### Closing Notes

Getting email validation right is doable, but not as trivial as many of the existing solutions make it seem.

1. Note that RFC1123 more specifically spells out that VRFY MUST be implemented but MAY be disabled.
2. This is not my luggage password.
3. It is still possible for an auto-reply bot to trigger reply-based verification schemes. Bots that click every link in received email are uncommon.
4. This is HMAC-MD5. It isn't insecure here, as collisions aren't important for HMAC. I chose it because it is short.

• n@ai is an email address in use by a person named Ian:

    %dig +short ai MX
    10 mail.offshore.ai.

• Thank you to bd for proofreading and reviewing this blog post.

## December 27, 2012

### Justin Dearing

#### “Forking” a long running command to a new tab with ConEmu. The magic of -new_console:c

Here’s a quick tip I thought I’d share after being quite rightly told to RTFM by the author of ConEmu.

Suppose you are running FarManager from ConEmu and want to update all your chocolatey packages. You can do so with the command cup all. However, that will block your FarManager session until cup all completes. You have four options to fix this:

1. You can start a new tab in ConEmu with the menu. This is undesirable because you’re obviously a command line guy.
2. You can press Shift+Enter after the cup all command. This is undesirable because, unless you configure ConEmu to intercept every new command window, a regular console window will appear. Also, the console will close automatically upon completion.
3. You can type cup all & pause and hit Shift+Enter to allow the window to stay open. Or
4. You can type cup all -new_console:c to open a new tab that will execute the command, and not close upon completion.

Obviously I recommend option 4.

## December 24, 2012

#### #!/bin/bash considered harmful

When one writes a shell script there are a variety of shebang lines that could be used:

• #!/bin/sh
• #!/usr/bin/env bash
• #!/bin/bash

or one of many other options.

Of these, only the first two are possibly correct.

Using #!/bin/bash is wrong because:

• Sometimes bash isn't installed.
• If it is installed, it may not be in /bin.
• If it is in /bin, the user may have decided to set PATH to use a different installation of bash. Using an absolute path like this overrides the user's choice.
• bash shouldn't be used for scripts intended to be portable.

If you have bash-specific code, use #!/usr/bin/env bash. If you want more portable code, try using Debian's checkbashisms to find instances of non-POSIX compliant shell scripting.

## December 22, 2012

### Justin Dearing

#### How to reference the registry in MSBuild4.0 (Visual Studio 2010) and later on a 64 bit OS

In the past I’ve written about using the Windows Registry to reference assembly paths in Visual Studio. In it I made reference to the seminal article New Registry syntax in MSBuild v3.5, which is the dialect Visual Studio 2008 speaks. That syntax had served me well until recently.

See, fate led me to writing a small C++/CLI program. In it I had to refer to some .NET assemblies that were not installed in the GAC. They were, however, installed as part of a software package that wrote its install path to the registry. So I figured out which value it wrote the install directory to and referenced it in the .vcxproj file using $(Registry:HKEY_LOCAL_MACHINE\Software\Company\Product@TargetDir). Unfortunately, it didn’t work! I did some troubleshooting and discovered it worked when I built the vcxproj from the command line with msbuild.exe. It seemed logical to blame it on the fact that I was using C++. Devenv.exe (the Visual Studio executable) must have been treating .vcxproj files differently than csproj and vbproj files. Then suddenly it dawned on me: the problem was I was running on a 64 bit version of Windows! This was a relief, because it meant that .vcxproj files were not special or subject to unique bugs.

To make a long story short, Visual Studio is a 32 bit application, and by default, when a 32 bit process interacts with the registry on a 64 bit version of Windows, HKEY_LOCAL_MACHINE\Software gets redirected to HKEY_LOCAL_MACHINE\Software\Wow6432Node. This MSDN article explains the gory details. At first it seemed the only workaround was a custom MSBuild task like the MSBuild Extension Pack. I complained on twitter to Scott Hanselman (blog|twitter). He replied with this article talking about how page faults, addressable memory space, etc. were not an issue. That article made some good points. However, it didn’t address my (at the time) very real and legitimate concern. Scott said he’d ask around internally if I filed a connect bug, and got David Kean (blog|twitter) involved in the conversation. I filed a connect bug. Then David pointed out a link to the MSBuild 4.0 property function GetRegistryValueFromView. Here is a comparison of the old and new syntax using MSBuild <Message/> tasks, the printf() of MSBuild.
```xml
<Target Name="BeforeBuild">
  <!-- Read the registry using the native MSBUILD 3.5 method:
       http://blogs.msdn.com/b/msbuild/archive/2007/05/04/new-registry-syntax-in-msbuild-v3-5.aspx -->
  <PropertyGroup>
    <MsBuildNativeProductId>$(Registry:HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion@ProductId)</MsBuildNativeProductId>
    <MsBuildNativeProductName>$(Registry:HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion@ProductName)</MsBuildNativeProductName>
    <MsBuild4NativeProductId>$([MSBuild]::GetRegistryValueFromView('HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion', 'ProductId', null, RegistryView.Registry64))</MsBuild4NativeProductId>
    <MsBuild4NativeProductName>$([MSBuild]::GetRegistryValueFromView('HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion', 'ProductName', null, RegistryView.Registry64))</MsBuild4NativeProductName>
  </PropertyGroup>
  <!-- Lets use the MSBuild Extension Pack (still no joy)
       http://www.msbuildextensionpack.com/help/4.0.5.0/html/9c8ecf24-3d8d-2b2d-e986-3e026dda95fe.htm -->
  <MSBuild.ExtensionPack.Computer.Registry TaskAction="Get" RegistryHive="LocalMachine" Key="SOFTWARE\Microsoft\Windows NT\CurrentVersion" Value="ProductId">
    <Output PropertyName="MsBuildExtProductId" TaskParameter="Data" />
  </MSBuild.ExtensionPack.Computer.Registry>
  <MSBuild.ExtensionPack.Computer.Registry TaskAction="Get" RegistryHive="LocalMachine" Key="SOFTWARE\Microsoft\Windows NT\CurrentVersion" Value="ProductName">
    <Output PropertyName="MsBuildExtProductName" TaskParameter="Data" />
  </MSBuild.ExtensionPack.Computer.Registry>
  <!-- And now RegistryView:
       http://msdn.microsoft.com/en-us/library/microsoft.win32.registryview.aspx -->
  <MSBuild.ExtensionPack.Computer.Registry TaskAction="Get" RegistryHive="LocalMachine" Key="SOFTWARE\Microsoft\Windows NT\CurrentVersion" Value="ProductId" RegistryView="Registry64">
    <Output PropertyName="MsBuildExt64ProductId" TaskParameter="Data" />
  </MSBuild.ExtensionPack.Computer.Registry>
  <MSBuild.ExtensionPack.Computer.Registry TaskAction="Get" RegistryHive="LocalMachine" Key="SOFTWARE\Microsoft\Windows NT\CurrentVersion" Value="ProductName" RegistryView="Registry64">
    <Output PropertyName="MsBuildExt64ProductName" TaskParameter="Data" />
  </MSBuild.ExtensionPack.Computer.Registry>
  <!-- All messages are of high importance so Visual Studio will display them by default. See:
       http://stackoverflow.com/questions/7557562/how-do-i-get-the-message-msbuild-task-that-shows-up-in-the-visual-studio-proje -->
  <Message Importance="High" Text="Using Msbuild Native: ProductId: $(MsBuildNativeProductId) ProductName: $(MsBuildNativeProductName)" />
  <Message Importance="High" Text="Using Msbuild v4 Native: ProductId: $(MsBuild4NativeProductId) ProductName: $(MsBuild4NativeProductName)" />
  <Message Importance="High" Text="Using Msbuild Extension Pack: ProductId: $(MsBuildExtProductId) ProductName: $(MsBuildExtProductName)" />
  <Message Importance="High" Text="Using Msbuild Extension Pack (Registry64): ProductId: $(MsBuildExt64ProductId) ProductName: $(MsBuildExt64ProductName)" />
</Target>
```

That MSBuild code has been tested via this github project on two machines running Visual Studio 2010 SP1. One runs Windows XP SP3 32 bit and the other runs Windows 8 64 bit. I’ve verified that $([MSBuild]::GetRegistryValueFromView('HKEY_LOCAL_MACHINE\SOFTWARE\whatever', 'value', null, RegistryView.Registry64)) will give you the same value as you see in regedit.exe.

Yes, MSBuild 4.0, and therefore Visual Studio 2010, solved this problem, and I simply didn’t google hard enough for the answer. However, I googled pretty hard, and I’m pretty good at googling. I didn’t think I was particularly rash in “pulling the Hanselman card.” The best I can do is write this blog post, comment on other blogs, and answer questions on StackOverflow to fill the internet with references to the new MSBuild syntax.

## December 21, 2012

#### Cormen on Algorithms: Blogging my way through [1/?]

Two of my good friends recently started reading Introduction to Algorithms by Thomas H. Cormen et al. Being unable to resist peer pressure, I decided to follow suit and read along.

I plan on blogging my way through the chapters writing my answers to the questions as I go through the book. Like most of my plans they don't always work out, but one could try.

Here it goes!

1.1-1: Give a real-world example in which each of the following computational problems appears: (a) sorting, (b) determining the best order for multiplying matrices, (c) finding the convex hull of a set of points.
Sorting - Sorting comes up in virtually every algorithm one could think of. Everything from optimizing monetary investments to efficient compression algorithms has to sort data at some point or another. A harder question might be: name one non-trivial algorithm that doesn't require sorting.
Multiplying Matrices - Graphics and scientific problems frequently require matrix operations.
Convex Hull - Collision detection for use in games, modeling biological systems, or other related work could make use of this.
1.1-2: Other than speed what other measures of efficiency might one use in a real-world setting?
It is possible to optimize for (and against) every limited resource. For example minimizing the amount of memory usage is important for embedded applications (and desktop ones too). Reducing total disk I/O is important to increase the longevity of hard drives. On a less technical note optimizing for monetary cost or man hours expended is important too.
1.1-3: Select a data structure you have seen previously and discuss its strengths and limitations
One of the most interesting data structures I know is the Bloom Filter. It is a probabilistic data structure that can determine definitively that an element is NOT in a set, but can't determine definitively that an element IS in a set. It works by hashing each element added to the set to a fixed-size bit array, and ORing that hash into the filter (which starts as all zeros). One can test whether an element is in the set by generating its hash and checking whether every bit set to 1 in the hash is also set to 1 in the filter. If so, then you have some degree of confidence that the element is in the set. Any negative means that what you are querying for has never been added.
While most probabilistic structures have certain properties in common, bloom filters have a number of interesting pros and cons.
1. A negative result is definitive - if a query returns that an element has not been added then one knows this to be 100% true.
2. Since hashes are fixed size the amount of memory a Bloom Filter uses is known and bounded.
3. Bloom filters can quickly become useless with large amounts of data. It is possible that every bit will be set to 1, which effectively makes every query a NOP.
4. It is impossible to remove data from a bloom filter. One can't just set all the bits of the hash to a zero because that might be removing other elements as well.
5. Without a second set of data there is no way to deterministically list all elements (unlike other probabilistic data structures such as Skip Lists).
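To make the mechanics concrete, here is a toy Bloom filter in Python. It uses the common k-hash-function variant (the description above folds everything into a single hash); the array size, hash count, and use of SHA-256 are arbitrary choices for the sketch:

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: k bit positions per element in an m-bit array."""

    def __init__(self, m_bits: int = 1024, k_hashes: int = 3):
        self.m = m_bits
        self.k = k_hashes
        self.bits = [0] * m_bits  # starts as all zeros

    def _positions(self, item: str):
        # Derive k positions by salting the hash with the index.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos] = 1  # OR the element's bits into the array

    def might_contain(self, item: str) -> bool:
        # False is definitive; True may be a false positive.
        return all(self.bits[pos] for pos in self._positions(item))
```

Note that might_contain returning False is a guarantee, while True only means "probably present" — exactly the asymmetry described above.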
1.1-4: How are the shortest path and traveling salesmen problems similar? How are they different?
The shortest path problem is:

Given a weighted (undirected) graph $G$, a start vertex $V_0$ and an end vertex $V_e$, find a path between $V_0$ and $V_e$ such that the sum of the weights is minimized. This could be expanded to: given a weighted graph $G$, find a path between every pair of vertices such that the sum of the weights for each path is minimized.

The traveling salesman problem is defined as:

Given a weighted, undirected graph $G$ and a start vertex $V_0$, find a path starting and ending at $V_0$ such that it passes through every other vertex exactly once and the sum of the weights is minimized.

The traveling salesman problem might make use of the shortest path problem repeatedly in order to come up with the correct solution.

1.1-5: Come up with a real-world problem in which only the best solution will do. Then come up with a problem in which a solution that is "approximately" the best will do.
There are very few problems where one needs the objectively optimal solution. Mathematical questions are the only problems I could think of that need that level of accuracy. Virtually every other problem needs only a good enough solution. Some examples include finding a fast route for packets on the internet or locating a piece of data in a database.

update 2011-06-30: modified text of answers 1.1-3 and 1.1-5 to be more clear.

## December 14, 2012

### Justin Dearing

#### Trouble purchasing an MSDN subscription

Recently I decided to purchase a Visual Studio 2012 Professional MSDN subscription, for several reasons. First, my Visual Studio 2012 30 day trial ran out and I absolutely need the non-express edition of it for a side project. Second, I’d like to be able to test poshrunner on older versions of Windows. Third, having access to checked builds of Windows would let me learn more in my Windows Internals study group. I started my journey to an MSDN subscription on Saturday, December 8th, 2012. I was able to access my benefits on Thursday, December 12th. The four day journey was not pleasant.
On Saturday I sat down, credit card in hand, and placed my order. I didn’t save the receipt (stupid, I know). I got no confirmation email, and I did not see an authorization on my credit card. I waited.

On Sunday I got notification that my order was pending. Perhaps they wanted to verify I wasn’t a software pirate. It seemed annoying that this wasn’t an instant process, but I remained patient and understanding.

Then on Tuesday I woke up to an email stating that my order was canceled. MSDN customer support hours are from 5:30 PST to 17:30 PST. I am on EST, so I had to wait until 8:30 to call. I was already in the office at that time. I was told the bank did not accept my charge, but that if I placed the order again in 48 hours, the security check would be overridden and I would be able to download the software instantaneously.

I tried buying the MSDN license again. It failed, but instantaneously. I called my bank. I was told both authorizations were successful on their end. So I called Microsoft again. They claimed a system glitch prevented them from accepting the payment. The specific phrase “system glitch” was used consistently by several MSDN customer support representatives over several phone calls to describe instances when my bank authorized a charge but Microsoft rejected it. I never uttered that phrase once. I suspect this is a common enough occurrence that there are procedures and guidelines in place documenting the “system glitch”.

At this point they asked if I had placed the second order from a computer on the same network as the first. I said no. The first order was placed at home and the second order was placed in the office. I was told to try again from the same network. I don’t have remote access to my home computer (take away my geek card), so I had to wait till I got home. I asked what would happen if it didn’t work when I tried again.
I was told the only other option was to place the order over the phone, and that phone orders take three business days to process. I didn’t get home until after midnight, so I didn’t try Tuesday night.

### Wednesday

Wednesday I awoke and attempted to place the order. It failed. I went into the office, called customer support and attempted a phone order. It failed, because my bank decided three identical charges for $1,305.41 (Microsoft collects sales tax in NY on top of the $1199 base price) seemed suspicious. Luckily I was able to fix that by responding to a text message CitiBank sent me. A chat session and a call later, the purchase seemed to have been resolved. I would have my subscription on Monday.

### Thursday

Thursday I got a call saying my order was canceled. However, T-Mobile dropped the call before I could deal with it. When I had some free time I called CitiBank. The first operator gave me some free airline miles and transferred me to Ashley, the fraud department specialist. Ashley assured me Microsoft could bang my credit card as often and as many times as they wanted to. I then called MSDN support and talked to Chris. I summarized the situation for Chris. I told him I didn’t want to wait another three days for a phone order. He said he had no power to deal with that. He determined my order from Wednesday was still going through. After putting me on hold a few times, he said he would get me a welcome email that would let me download my MSDN products in 30 minutes. I got his name and a case number, and he did just that. I got a call back to ensure I was able to access my download, and everything worked just fine. I’m a little curious as to why his tune changed and he was able to get me my subscription number in thirty minutes, though.

### Conclusion

First of all, I have to thank CitiBank for their actions. At no point did they do anything wrong or fail to do anything.
Secondly, the customer service staff at MSDN were very professional and understanding, despite my growing irateness. However, the fact is they were never able to tell me why my order was canceled. If they had at some point explained that I was flagged as a pirate, or something else, I’d be a bit more understanding.

Thirdly, why does the process take so long? I was able to buy a new car in about an hour. It took a few days for delivery because the package I wanted wasn’t on the lot. However, it took less than four days for the car to be driven off the lot (by someone else, because it was the car I learned stick on). The MSDN subscription sales model seems to make sense for businesses purchasing volume licenses. They take checks, and you can talk to a real person. It’s not at all optimized for the person who wants to buy one MSDN license “right now”. People like me are on the lower end of the income bracket for Microsoft, but we are also the ones that are either really passionate hobbyists, entrepreneurs, or the people on the fence. While I’m still going to develop on the Microsoft stack for years, this experience has left a bad taste in my mouth for their purchase process, compared with, for example, JetBrains or RedGate.

In the end the real issue was the lack of transparency. It’s generally safe to assume that when you are buying software for online delivery, you will have it within an hour. If Microsoft made it clear it’s not as simple for them, first-time subscribers like me would be a little more understanding.

## December 10, 2012

### Justin Dearing

#### Announcing ILRepack-BuildTasks

ILMerge is a great tool for creating a single executable out of multiple .NET assemblies. However, it has two limitations. The first is that it’s not open source, and you’re not supposed to include a copy in your public source code repos. The second is that it’s an executable, and therefore needs to be called from the MSBuild post-build event as opposed to a proper MSBuild task.
Each problem had its own mutually exclusive solution. For the first problem, Francois Valdy (blog|twitter) wrote IL-Repack, an open source clone of ILMerge. So now I could have an exe that could be included in github repos. This allowed my projects (specifically poshrunner.exe) to have a merge step in the post-build. Although this was still a clunky batch file embedded in the csproj, it just worked.

For the second problem, Marcus Griep (blog|twitter) created ILMerge Tasks. Since the merging APIs in ILMerge are all exposed as public members, you can simply reference the exe as a dll. He did this in an MSBuild DLL. However, this dll still requires ILMerge.exe.

These solutions are no longer mutually exclusive. I’ve forked ILMerge-tasks (and contacted Marcus to see if he wants to incorporate my changes) and had it reference ILRepack. The new project is called ILRepack-BuildTasks on github. Enjoy!

#### A misleading SQL Error Message: Error: 18456, Severity: 14, State: 38

On Friday I had to help a client out with an error that kept appearing in their event logs:

Login failed for user ‘domain\user’. Reason: Failed to open the explicitly specified database. [CLIENT: 192.168.0.25]

It took me a while to troubleshoot the error. The client’s internal system administrator (who was quite sharp) only had to call me in in the first place because the error was a little misleading. See, the first thing I did when I saw that was audit login failures. In the trace, the database was listed as master, and the user had full access to master. However, I later learned that the user was switching from master to a non-existent database, which was triggering this error. I figured this out thanks to Sadequl Hussain‘s article, SQL Server Error 18456: Finding the Missing Databases. Sadequl explains in detail the how and the why. However, the take-home is that you need to trace for User Error Message to get the message that tells you what database you are connecting to. This took me about an hour to solve.
Honestly, it was a bit of a humbling experience. It took me an hour to figure out something a full-time senior DBA would probably be able to solve in 15 minutes. However, I’ll probably be able to solve this error in 15 minutes myself going forward. Finally, the fact that it took me a while to find the one blog article that explained what the issue actually was proves how dependent I’ve become upon google.

## December 05, 2012

### Justin Dearing

#### The #MongoHelp twitter manifesto

## What is #mongohelp?

#mongohelp is a hashtag on twitter that members of the mongo community use for support.

## What’s appropriate to tag #mongohelp?

For something to be appropriate for the #mongohelp hashtag, one of the following two criteria must be met:

1. You are asking a question related to MongoDB.
2. You are @replying to a #mongohelp question with an answer or a request for clarification.

Those are the rules. You can reply with a recommendation for a commercial product or service, but please disclose if you work for, partner with, or own the product. You can’t make unsolicited promotions with this hashtag. You can’t post a link to your latest blog article to the tag, unless you are answering a question.

## Any other guidelines?

Twitter is about instant gratification, so if you ask a question on #mongohelp, it’s expected you will stick around for 10-15 minutes for an answer. Also, if you have a long question to ask on #mongohelp, you should ask it in one of these forums, and link to the question.

## This seems awfully familiar

You’re absolutely right! I borrowed the idea from Aaron Nelson’s (blog|twitter) proposal, documented by Brent Ozar (blog|twitter), of creating the #sqlhelp hashtag. I’ve spent the last year speaking about MongoDB at SQL Saturdays and observing both communities. Both communities are very self-organized, and provide a lot of free help.
The one thing I saw missing from the MongoDB community was a grassroots support tag to connect with others.

## November 26, 2012

### Eitan Adler

#### Don't Use gettimeofday(2) or time(3) for Profiling

One common technique for profiling programs is to use the gettimeofday system call, with code that looks something like this:

```c
/* Example (incorrect) profiling code that uses gettimeofday */
#include <sys/time.h>
#include <time.h>
#include <stdio.h>

void codetoprofile(void);

void function(void)
{
    struct timeval before;
    struct timeval after;

    gettimeofday(&before, NULL);
    codetoprofile();
    gettimeofday(&after, NULL);

    time_t delta = after.tv_sec - before.tv_sec;
    printf("%ld\n", (long)delta);
}
```

However, using gettimeofday(2) or time(3) to obtain profiling information is wrong for many reasons:

1. Time can go backwards. In a virtualized environment this can happen quite often. In non-virtualized environments this can happen due to time zones. Note that even passing CLOCK_MONOTONIC to clock_gettime(2) doesn't help, as it can go backwards during a leap second expansion.
2. Time can change drastically for no reason. Systems with NTP enabled periodically sync their time with a time source. This can cause the system time to change by minutes, hours, or even days!
3. These functions measure wall clock time. Time spent on entirely unrelated processes is going to be included in the profiling data!
4. Even if you have disabled everything else on the system[1], the delta computed above includes both user time and system time. If your algorithm is very fast but the kernel has a slow implementation of some system call, you won't learn much.
5. gettimeofday relies on the CPU clock, which may differ across cores, resulting in time skew.

So what should be used instead? There isn't a good, portable function to obtain profiling information. However, there are options for those not tied to a particular system (or those willing to maintain multiple implementations for different systems).
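The wall-clock pitfalls above can be sidestepped by asking the kernel for per-process CPU time instead. As a rough sketch, here is what that looks like in Python, whose standard resource module wraps getrusage(2) on Unix; the workload function is a made-up example:

```python
import resource

def cpu_times(fn):
    """Run fn and return (user_delta, system_delta) in CPU seconds for this process."""
    before = resource.getrusage(resource.RUSAGE_SELF)
    fn()
    after = resource.getrusage(resource.RUSAGE_SELF)
    return (after.ru_utime - before.ru_utime,   # time spent in user code
            after.ru_stime - before.ru_stime)   # time spent in the kernel

user, system = cpu_times(lambda: sum(i * i for i in range(200_000)))
```

Unlike gettimeofday, these counters only advance while this process is running, and user time is reported separately from system time.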
The getrusage(2) system call is one option for profiling data. It provides separate fields for user time (ru_utime) and system time (ru_stime) at a relatively high level of precision and accuracy. Using DTrace's profile provider also seems to be a decent choice, although I have limited experience with it. Finally, using APIs meant to access hardware-specific features, such as FreeBSD's hwpmc, is likely to provide the best results at the cost of being the least portable. Linux has similar features such as oprofile and perf. Using dedicated profilers such as Intel's VTune[2] may also be worthwhile.

1. Including networking, background process swapping, cron, etc.
2. A FreeBSD version is available.

update 2012-11-26: Include note about clock skew across cores.

## November 22, 2012

### Justin Dearing

#### Announcing poshrunner.exe so MyScript.ps1 can use MyScript.ps1.config instead of powershell.exe.config

I have a tendency to do odd things with technology, so that things don’t just work. When I point out the obscure edge cases I find, most people tell me, “well, don’t do that.” I usually ignore them and dream of tilting at windmills. Well, today a windmill has been tilted, and this is the epic tale.

I’m a developer that fell in love with PowerShell. As such I often call .NET API functions from PowerShell scripts. This usually just works. However, it kind of falls apart when you have to use settings in the app.config file. This makes it basically impossible to call functions from a DLL that uses NHibernate, Entity Framework or WCF service references. (However, WCF services can be called directly from PowerShell quite easily.)

The solution is to run the PowerShell script in a new PowerShell Runspace in a second AppDomain that uses its own app.config. However, things quickly fall apart, because you need to write three classes that inherit from PSHostRawUserInterface, PSHostUserInterface and PSHost respectively, or else Write-Host will throw an exception.
Now all this is a lot scarier than it sounds. However, it stops two important groups of people from ever using PowerShell to call DLLs that absolutely require you to manipulate your app.config:

• People scared off by the word AppDomain
• People that realize they have better things to do than everything I described above

Lucky for these two groups of people, I wasted my time so they didn’t have to! The project is currently called AppDomainPoshRunner, and I ILMerge it (via IL-Repack) into poshrunner.exe. Right now poshrunner takes one command line argument: the path to a script. If the script exists, it will run it in an AppDomain whose config file is scriptname.config. Log4net configuration is read from a file called ADPR.log4net.config in the same directory as poshrunner.config.

The full background is too long and convoluted for this post. This was all born out of a problem with calling New-WebServiceProxy twice in the same PowerShell console. I use log4net to write the console messages, so this has the potential to be quite extendable. Then Stan needed to run PowerShell scripts from msbuild and was complaining to me about it over twitter. He didn’t like the hacky solution I had then. Eventually I realized this was the way to simplify my previous solution.

So download the zip file. Try it out. Complain to me when you find bugs!

TLDR; Unlike powershell.exe -file foo.ps1, which uses the shared powershell.exe.config, poshrunner.exe foo.ps1 uses foo.ps1.config, for great justice. Download it now!

### Nate Berry

#### Warcraft III on Ubuntu 12.04

Installing Ubuntu on the MacBook recently, I knew there would be a bunch of OSX programs I would no longer be able to run, but I was pretty confident that I’d be able to get some Windows programs going with wine. Having had good luck with Temple of Elemental Evil on the Elitebook last December, [...]
## November 21, 2012

### Eitan Adler

#### Finding the majority element in a stream of numbers

Some time ago I came across the following question. As input, a finite stream of numbers is provided. Define an algorithm to find the majority element of the input. The algorithm need not provide a sensible result if no majority element exists. You may assume a transdichotomous memory model.

There are a few definitions which may not be immediately clear:

Stream: A possibly infinite set of data which may not be reused in either the forward or backward direction without explicitly storing it.

Majority element: An element in a set which occurs more than half the time.

Unfortunately this answer isn't of my own invention, but it is interesting and succinct. Using 3 registers, the accumulator, the guess and the current element (next):

1. Initialize accumulator to 0.
2. Accept the next element of the stream and place it into next. If there are no more elements, go to step 7.
3. If accumulator is 0, place next into guess and increment accumulator.
4. Else if guess matches next, increment accumulator.
5. Else decrement accumulator.
6. Go to step 2.
7. Return the value in guess as the result.

An interesting property of this algorithm is that it can be implemented in $O(n)$ time even on a single tape Turing Machine.

## November 08, 2012

### Justin Dearing

#### Visual Studio 2010 and VisualStudio.com TFS Hosting

Yesterday, Stan, the founder of this blog, gave me a link to a project hosted on the Team Foundation Service (visualstudio.com). I tried to connect to it with Visual Studio 2010. It simply refused to work. After much annoyance, he asked me to try adding the TFS server to Visual Studio 2012, and it worked (why didn’t I think of that?). Eventually I figured out that I needed to install the Visual Studio 2010 SP1 Team Foundation Server 2012 Compatibility GDR (KB2662296). Then I was able to add the solution to Visual Studio 2010.
It seems there are several updates for Visual Studio 2010 SP1, some specifically dealing with Windows 8 compatibility. Unfortunately, Windows Update does not prompt me to install them. I will search for and install these tonight to prevent future issues.

#### Windows Internals Study Group – First meeting

Last night myself and two others had our first planning meeting via Google huddle for our Windows Internals study group. We will be meeting next Wednesday 2012-11-17 at 20:30 EST to discuss the first two chapters of Windows Internals, 6th edition. If you still want to participate it's not too late, just let me know. One thing we decided was to make all our notes public. Right now they are being stored in a Google Drive shared folder that is publicly accessible. The information there will grow with time.

## November 04, 2012

### Justin Dearing

#### My ConEmu Tasks

Update: Thanks to the author of ConEmu for some constructive feedback in the comments!

I've fallen in love with ConEmu after being introduced to it by a Scott Hanselman blog post. While my initial attraction to it was its integration with farmanager, that was only its initial selling point. The ability to have one resizable tabbed window to hold all my cmd.exe, powershell.exe and bash.exe consoles is what makes it a killer app. While 90% of my command line needs are met by a vanilla powershell.exe or far.exe command line, sometimes I need my environment set up in another way. For example, sometimes I need the vcvarsall.bat run that sets the environment up for a particular version of Visual Studio or the Windows SDK. Also, on occasion I will need to fire up the mingw bash instance that comes with git. (Generally, this is only to run ssh-keygen.exe.) Finally, while a 64 bit instance of PowerShell 3.0 captures 99% of my PowerShell needs, I want easy access to both 32 and 64 bit versions of the PowerShell 1.0, 2.0 and 3.0 environments.
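To make this concrete, a task of the kind described here ends up under ConEmu's Tasks registry key looking roughly like the following. This is a hypothetical sketch with an invented task name and an invented vcvarsall.bat path, not an actual export:

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\ConEmu\.Vanilla\Tasks\Task1]
"Name"="VS2010 x86 tools"
"GuiArgs"="/single /Dir %userprofile%"
"Count"=dword:00000001
"Active"=dword:00000000
"Cmd1"="cmd.exe /k C:\\VC\\vcvarsall.bat x86"
```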
So I set all these up in ConEmu and exported the registry keys into a gist repository, and now I share them here as a pastebin (because GitHub lacks syntax highlighting):

## Breakdown of settings

All the settings are stored in subkeys of HKEY_CURRENT_USER\Software\ConEmu\.Vanilla\Tasks with the name TaskN, where N is a base 10 integer. In addition, that key has a DWORD value named Count that contains the number of tasks. Each task contains the following key/value pairs:

• Name This is the name of the task.
• GuiArgs These are the ConEmu configuration options. In all my cases I use /single /Dir %userprofile%. The /single flag adds the task to the currently open ConEmu instance, if there is one. The /Dir %userprofile% switch sets the working directory to my home directory.
• CmdN These are the commands executed. Each one represents a command executed in a new tab by the task. All the tasks I have configured execute exactly one tab.
• Count This is the number of tabs opened by this task. Like the Count value under the Tasks key, it should match the number of CmdN values in a Task.
• Active I set this to zero, but if you have a task that opens multiple tabs you can set this to the tab number you want activated after running that task. For example, if you have a task with a Cmd1 that opens mongod.exe and a Cmd2 that opens an instance of tail.exe tailing the mongo logs, setting Active to 2 will display the console running the tail.

ConEmu's true power is your ability to customize it, so by showing you how I customized it, I hope you have found it to be more powerful.

## November 01, 2012

### Justin Dearing

#### Windows Internals Google Group Created

I've made a Google group to talk about the Windows Internals study group I proposed. Let's move the conversation there.
## October 31, 2012

### Eitan Adler

#### Finding the min and max in 1.5n comparisons

A friend of mine recently gave me the following problem: Given an unsorted set of numbers, find the minimum and maximum of the set in at most $1.5n$ comparisons. My answer involves splitting the list up pairwise and finding each result on only half of the set.

1. Go through the list and compare every even index to its immediate right (odd) index. Sort each pair numerically within itself. This step takes $\dfrac{1}{2}n$ comparisons.
2. Find the minimum over the smaller element of each pair and the maximum over the larger element of each pair using the typical algorithm. This step takes $n$ comparisons.

Note that this could be done in one pass by doing the pair comparison and the min/max comparison together. Is there a better way?

#### Blogging my way through CLRS Section 11.1 (edition 2)

I've taken a brief break from blogging about my Cormen readings, but I decided to write up the answers to chapter 11. Note that the chapter and question numbers may not match up because I'm using an older edition of the book.

Question 11.1-1: Suppose that a dynamic set $S$ is represented by a direct address table $T$ of length $m$. Describe a procedure that finds the maximum element of $S$. What is the worst case performance of your procedure?

Assuming the addresses are sorted by key, start at the end of the direct address table and scan downward until a non-empty slot is found; this is the maximum. Without that assumption:

1. Initialize $max$ to $-\infty$.
2. Start at the first address in the table and scan downward until a used slot is found. If you reach the end, go to step 5.
3. Compare the key to $max$. If it is greater, assign it to $max$.
4. Go to step 2.
5. Return $max$.

The performance of this algorithm is $\Theta(m)$. A slightly smaller bound can be found in the first case of $\Theta(m - max)$.

Question 11.1-2: Describe how to use a bit vector to represent a dynamic set of distinct elements with no satellite data. Dictionary operations should run in $O(1)$ time.
Initialize a bit vector of length $|U|$ to all $0$s. When storing key $k$, set the $k$th bit; when deleting key $k$, set the $k$th bit to zero. This is $O(1)$ even in a non-transdichotomous model, though it may be slower.

Question 11.1-3: Suggest how to implement a direct address table in which the keys of stored elements do not need to be distinct and the elements can have satellite data. All three dictionary operations must take $O(1)$ time.

Each element in the table should be a pointer to the head of a linked list containing the satellite data. $nil$ can be used for non-existent items.

Question 11.1-4: We wish to implement a dictionary by using direct addressing on a large array. At the start the array entries may contain garbage, and initializing the entire array is impractical because of its size. Describe a scheme for implementing a direct address dictionary on the array. Dictionary operations should take $O(1)$ time. Using an additional stack with size proportional to the number of stored keys is permitted.

On insert, the array address is pushed onto a stack, and the array element is set to point to that stack location. On search, the array element's value is checked to see if it points into the used portion of the stack; if it does, the stack entry is checked to see if it points back to the array element.[1] On delete, the array element can be set to a value not pointing into the stack, but this isn't required. If the element points to the top of the stack, the top is simply popped off. If it points into the middle of the stack, the top element and the key's element are swapped and then the pop is performed. In addition, the array slot which the old top element was pointing to must be modified to point to its new stack location.

Question 11.2-1: Suppose we use a hash function $h$ to hash $n$ distinct keys into an array $T$ of length $m$. Assuming simple uniform hashing, what is the expected number of collisions?

By linearity of expectation: each of the $\binom{n}{2}$ pairs of keys collides with probability $1/m$, so the expected number of collisions is $\dfrac{n(n-1)}{2m}$.
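The scheme from question 11.1-4 can be sketched in Python. This is an illustrative sketch of mine, not from the original post; the random initial contents stand in for the uninitialized garbage the question describes:

```python
import random

class LazyDirectAddressTable:
    """Direct addressing over an uninitialized array (the 11.1-4 scheme).

    arr[k] is trusted only if it points into the used part of the stack
    AND the stack entry points back at k; garbage values fail this
    cross-check, so the array never needs to be initialized.
    """

    def __init__(self, universe_size):
        # Simulate an uninitialized array full of garbage values.
        self.arr = [random.randint(-10**9, 10**9) for _ in range(universe_size)]
        self.stack = []  # holds exactly the keys currently stored

    def member(self, k):
        p = self.arr[k]
        return 0 <= p < len(self.stack) and self.stack[p] == k

    def insert(self, k):
        if not self.member(k):
            self.arr[k] = len(self.stack)
            self.stack.append(k)

    def delete(self, k):
        if self.member(k):
            p = self.arr[k]
            last = self.stack.pop()
            if p < len(self.stack):   # k was not on top: move the old top down
                self.stack[p] = last
                self.arr[last] = p
```

All three operations are a constant number of array and stack accesses, so each runs in $O(1)$ time.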
Question 11.2-2: Demonstrate the insertion of the keys $5, 28, 19, 15, 20, 33, 12, 17, 10$ into a hash table with 9 slots and $h(k) = k \mod{9}$[2]

| hash | values |
|------|--------|
| 1 | 28 -> 19 -> 10 |
| 2 | 20 |
| 3 | 12 |
| 5 | 5 |
| 6 | 15 -> 33 |
| 8 | 17 |

Question 11.2-3: If the keys were stored in sorted order, how is the running time for successful searches, unsuccessful searches, insertions, and deletions affected under the assumption of simple uniform hashing?

Successful and unsuccessful searches are largely unaffected, although small gains can be achieved if the search bails out early once it finds a key later in the sort order than the one being searched for. Insertions are the most affected operation: the time changes from $\Theta(1)$ to $O(n/m)$. Deletions are unaffected: if the list is doubly linked the time remains $O(1)$, and if it is singly linked the time remains $O(1 + \alpha)$.

Question 11.2-4: Suggest how storage for elements can be allocated and deallocated within the hash table by linking all unused slots into a free list. Assume one slot can store a flag and either one element or two pointers. All dictionary operations should run in $O(1)$ expected time.

Initialize all the slots into a singly linked free list (flag set to false) with a head and tail pointer. On insert, use the memory pointed to by the head pointer, set the flag to true for the new element, and advance the head pointer. On delete, set the flag to false and insert the newly freed memory at the tail of the free list.

Question 11.2-5: Show that if $|U| > nm$, with $m$ the number of slots, there is a subset of $U$ of size $n$ consisting of keys that all hash to the same slot, so that the worst case searching time for hashing with chaining is $\Theta(n)$.

Assume the worst case of $|U|$ keys in the hash table. Assuming the optimal case of simple uniform hashing, all $m$ slots will have $|U|/m > n$ items. Removing the assumption of uniform hashing will allow some chains to become shorter at the expense of other chains becoming longer.
There are more items than the number of slots, so at least one slot must have at least $n$ items by the pigeonhole principle.

Question 11.3-1: Suppose we wish to search a linked list of length $n$, where every element contains a key $k$ along with a hash value $h(k)$. Each key is a long character string. How might we take advantage of the hash values when searching the list for an element with a given key?

You can use $h(k)$ to create a Bloom filter of strings in the linked list. This is a $\Theta(1)$ check to determine whether it is possible that a string appears in the linked list. Additionally, you can create a hash table of pointers to elements in the linked list with a given hash value; this way you only check a subset of the linked list. Alternatively, one can compare the stored hash of each item to the hash of the search value and only do the long string comparison when the hashes match.

Question 11.3-2: Suppose that a string of length $r$ is hashed into $m$ slots by treating it as a radix-128 number and then using the division method. The number $m$ is easily represented as a 32 bit word, but the string of $r$ characters treated as a radix-128 number takes many words. How can we apply the division method to compute the hash of the character string without using more than a constant number of words outside of the string itself?

Process the string one character at a time, keeping only a running remainder: start with $h = 0$ and for each character $c$ set $h \leftarrow (128h + c) \mod m$. Since taking the remainder at every step does not change the final result modulo $m$, this computes the division-method hash of the full radix-128 number using only a constant number of words.

Question 11.3-4: Consider a hash table of size $m = 1000$ and a corresponding hash function $h(k) = \lfloor m (k A \mod{1})\rfloor$ for $A = \frac{\sqrt{5} - 1}{2}$. Compute the locations to which the keys 61, 62, 63, 64, 65 are mapped.

| key | hash |
|-----|------|
| 61 | 700 |
| 62 | 318 |
| 63 | 936 |
| 64 | 554 |
| 65 | 172 |

#### Blogging my way through CLRS section 3.1 [part 5]

Part 4 here. I wrote an entire blog post explaining the answers to 2.3 but Blogger decided to eat it.
I don't want to redo those answers so here is 3.1. From now on I will title my posts with the section number as well, to help Google.

Question 3.1-1: Let $f(n)$ and $g(n)$ be asymptotically non-negative functions. Using the basic definition of $\Theta$-notation, prove that $\max(f(n), g(n)) \in \Theta(f(n) + g(n))$.

CLRS defines $\Theta$ as $\Theta(g(n)) = \{ f(n) :$ there exist some positive constants $c_1, c_2$, and $n_0$ such that $0 \leq c_1 g(n) \leq f(n) \leq c_2 g(n)$ for all $n \geq n_0 \}$.

Essentially we must prove that there exist some $c_1$ and $c_2$ such that $c_1 (f(n) + g(n)) \leq \max(f(n), g(n)) \leq c_2 (f(n) + g(n))$.

There are a variety of ways to do this, but I will choose the easiest way I could think of. Since $f(n)$ and $g(n)$ are asymptotically non-negative, we know that $\max(f(n), g(n)) \leq f(n) + g(n)$, and we further know that $f(n) + g(n)$ can't be more than twice $\max(f(n), g(n))$. What we have then are the following inequalities: $$\max(f(n), g(n)) \leq 1 \times (f(n) + g(n))$$ and $$\frac{1}{2} \times (f(n) + g(n)) \leq \max(f(n), g(n))$$ So the definition is satisfied with $c_1 = \frac{1}{2}$ and $c_2 = 1$.

Question 3.1-2: Show for any real constants $a$ and $b$, where $b \gt 0$, that $(n+a)^b \in \Theta(n^b)$.

Because $a$ is a constant and the definition of $\Theta$ only has to hold after some $n_0$, adding $a$ to $n$ does not affect the definition, and we simplify to $n^b \in \Theta(n^b)$, which is trivially true.

Question 3.1-3: Explain why the statement "The running time of $A$ is at least $O(n^2)$" is meaningless.

I'm a little uncertain of this answer, but I think this is what CLRS is getting at: when we say a function $f(n)$ has a running time of $O(g(n))$, what we really mean is that $f(n)$ has an asymptotic upper bound of $g(n)$, i.e. $f(n) \leq c g(n)$ after some $n_0$. Saying a running time is "at least" $O(n^2)$ therefore conveys no information: $O(n^2)$ only bounds the running time from above, and any running time at all is "at least" some function bounded above. A lower-bound claim needs lower-bound notation, i.e. $\Omega(n^2)$.

Question 3.1-4: Is $2^{n+1} = O(2^n)$?
Is $2^{2n} = O(2^n)$?

$2^{n+1} = 2 \times 2^n$, which means that $2^{n+1} \leq c \times 2^n$ holds with $c = 2$ for all $n$, so we have our answer: $2^{n+1} \in O(2^n)$. Alternatively we could say that the two functions only differ by a constant coefficient, and therefore the answer is yes.

For the second part, $2^{2n} = 2^n \times 2^n$, and there is no constant $c$ such that $2^n \times 2^n \leq c \times 2^n$ for all large $n$; therefore $2^{2n} \notin O(2^n)$.

Question 3.1-5: Prove that for any two functions $f(n)$ and $g(n)$, we have $f(n) \in \Theta(g(n)) \iff f(n) \in O(g(n)) \And f(n) \in \Omega(g(n))$.

This is an "if and only if" problem, so we must prove it in two parts. Firstly, if $f(n) \in O(g(n))$ then there exist some $c_2$ and $n_0$ such that $f(n) \leq c_2 \times g(n)$ after some $n_0$. Further, if $f(n) \in \Omega(g(n))$ then there exist some $c_1$ and $n_0$ such that $f(n) \geq c_1 \times g(n)$ after some $n_0$. If we combine the above two statements (which come from the definitions of $\Omega$ and $O$), then we know that there exist some $c_1$, $c_2$, and $n_0$ such that $c_1 g(n) \leq f(n) \leq c_2 g(n)$ for all $n \geq n_0$, which is exactly the definition of $\Theta$.

For the other direction: if $f(n) \in \Theta(g(n))$ then we can split the above inequality and show that each of the individual statements is true.

Question 3.1-6: Prove that the running time of an algorithm is $\Theta(g(n)) \iff$ its worst-case running time is $O(g(n))$ and its best case running time is $\Omega(g(n))$.

I'm going to try for an intuitive proof here instead of a mathematical one. If the running time is asymptotically bounded from above in the worst case and asymptotically bounded from below in the best case, then every run of the algorithm is squeezed between those bounds: $f(n)$ never goes below some constant times $g(n)$ and never goes above some (other) constant times $g(n)$. This is what we get from the above definition of $\Theta(g(n))$. A mathematical proof follows from question 3.1-5.
Question 3.1-7: Prove that $o(g(n)) \cap \omega(g(n)) = \varnothing$.

Little o and little omega are defined as follows: $$o(g(n)) = \{ f(n) : \forall c > 0\ \exists n_0 \text{ such that } 0 \leq f(n) \lt c \times g(n)\ \forall n \gt n_0 \}$$ and $$\omega(g(n)) = \{ f(n) : \forall c > 0\ \exists n_0 \text{ such that } 0 \leq c \times g(n) \lt f(n)\ \forall n \gt n_0 \}$$

In other words, $$f(n) \in o(g(n)) \iff \lim_{n \to \infty} \frac {f(n)} {g(n)} = 0$$ and $$f(n) \in \omega(g(n)) \iff \lim_{n \to \infty} \frac {f(n)} {g(n)} = \infty$$

These cannot both be true at the same time, as that would require that $0 = \infty$.

#### Blogging my way through CLRS [part 4]

Part 3 here. This set is a bit easier than last time.

Question 2.2-1: Express the function $$\frac{n^3}{1000} - 100n^2 - 100n + 3$$ in terms of $\Theta$ notation.

A function $g(x)$ is said to be in the set of all functions $\Theta(f(x))$ if and only if $g(x)$ is also in the set of all functions $\Omega(f(x))$ and in the set of all functions $O(f(x))$. Symbolically: $$g(x) \in \Theta(f(x)) \iff g(x) \in O(f(x)) \And g(x) \in \Omega(f(x))$$

A function $g(x)$ is in the set of all functions $O(f(x))$ if and only if after some constant $n_0$ it is always true that for some constant $C$, $g(x) \lt Cf(x)$. A function $g(x)$ is in the set of all functions $\Omega(f(x))$ if and only if after some constant $n_0$ it is always true that for some constant $C$, $g(x) \gt Cf(x)$.

With our function we could choose practically any function to satisfy either one of these conditions. However, we need to satisfy both of them. One thing that makes this easier is that it only has to be true after some constant number. This allows us to throw away the "trivial" terms that are eventually overwhelmed by the faster growing ones. We are therefore left with $n^3$, so the answer is $\Theta(n^3)$.
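A quick numerical sanity check of 2.2-1 (my addition, not the post's): the ratio of the full polynomial to $n^3$ settles toward the constant $1/1000$, which is exactly the behaviour $\Theta(n^3)$ predicts:

```python
def f(n):
    return n**3 / 1000 - 100 * n**2 - 100 * n + 3

# The ratio f(n) / n**3 approaches 1/1000 as n grows,
# so the cubic term dominates and f is Theta(n**3).
for n in (10**4, 10**5, 10**6, 10**7):
    print(n, f(n) / n**3)
```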
Question 2.2-2: Consider sorting $n$ numbers stored in an array $A$ by first finding the smallest element and exchanging it with the element in A[1], then finding the second smallest element and exchanging it with A[2], and continuing this for the first $n-1$ elements of $A$. Write the pseudocode for this algorithm, which is known as selection sort. What loop invariant does this algorithm maintain? Why does it need to run only for the first $n-1$ elements and not for all $n$? Give the best case and worst case running times in $\Theta$ notation.

This question is asking us to analyze selection sort in a variety of ways. I will start with writing out the pseudocode:

for $j \leftarrow 1$ to $n-1$
  min $\leftarrow j$
  for $i \leftarrow j+1$ to $n$
    if $A[i] < A[min]$ then min $\leftarrow i$
  if min $\neq j$ then swap $A[min]$ and $A[j]$

A loop invariant that this algorithm maintains is that every element prior to A[j] is sorted within the subarray A[1] to A[j] and is less than or equal to every element in the subarray A[j+1] to A[n]. I do not believe a stronger loop invariant is provable.

The algorithm only needs to run until $n-1$ because of the second part of the loop invariant. When $j = n-1$ we know that every element after A[j], which is just A[n], is not less than any previous element. Therefore no check has to be done.

In the best case (an already sorted array) and in the worst case (a reverse sorted array) the running time is the same: $\Theta(n^2)$.

Question 2.2-3: Consider linear search again. How many elements of the input sequence need to be checked on average, assuming that the element being searched for is equally likely to be any element in the array? How about in the worst case? What are the average-case and worst-case running times of linear search in $\Theta$ notation?

The best case for a linear search algorithm is when the searched-for element is in the first location. In the worst case all $n$ locations must be searched. In the average case $\frac{n}{2}$ locations have to be searched. Both the average case and the worst case are therefore $\Theta(n)$.
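The selection-sort pseudocode from 2.2-2 translates directly to Python (my rendering, zero-indexed where the book's arrays are one-indexed):

```python
def selection_sort(a):
    """In-place selection sort: grow a sorted prefix one element at a time."""
    n = len(a)
    for j in range(n - 1):            # the last element falls into place for free
        smallest = j
        for i in range(j + 1, n):     # find the minimum of a[j..n-1]
            if a[i] < a[smallest]:
                smallest = i
        if smallest != j:
            a[smallest], a[j] = a[j], a[smallest]
    return a

print(selection_sort([5, 2, 4, 6, 1, 3]))
```

The inner loop always scans the whole unsorted suffix, which is why the best and worst cases are both $\Theta(n^2)$.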
Question 2.2-4: How can we modify almost any algorithm to have a good best-case running time?

I have no idea what this question is asking for. I guess checking for the optimal case (as in a pre-sorted array for a sorting algorithm) and then skipping the rest of the procedure might work.

#### Blogging my way through CLRS [3/?]

Part 2 here. According to Wikipedia, Introduction to Algorithms is also known as CLRS, which is shorter (and more fair to the other authors), so I'll use that name from now on.

Question 2.1-1 asks me to redraw a previous diagram, but with different numbers. I am not going to post that here.

Question 2.1-2: Rewrite the insertion sort procedure to sort into nonincreasing instead of nondecreasing order.

Here is the pseudocode of the nonincreasing version of insertion sort:

for $j \leftarrow 2$ to length[A]
  do key $\leftarrow A[j]$
    $\rhd$ Insert A[j] into the sorted sequence A[1..j-1]
    $i \leftarrow j - 1$
    while $i \gt 0$ AND $A[i] \lt key$
      do $A[i+1] \leftarrow A[i]$
        $i \leftarrow i - 1$
    $A[i+1] \leftarrow key$

Now we prove that this loop correctly terminates with a nonincreasing array, to about the same level of formality as the book proved the original.

Initialization: At the first iteration, when $j=2$, the subarray A[1..j-1] is trivially sorted (as it has only one element).

Maintenance: In order to prove maintenance we need to show that the inner loop correctly terminates with an array with "space" for the correct element. As CLRS did not prove this property, I will also skip this proof.

Termination: This loop terminates when $j > $ length[A], i.e. when $j = $ length[A] $+ 1$. Since we have "proven" (to some level) the maintenance of the loop invariant (that at each point during the loop the subarray A[1..j-1] is sorted), we can substitute length[A] $+ 1$ for $j$, which gives A[1..length[A]], or the entire array. This shows that the loop terminates with a correctly sorted array.

Question 2.1-3: Input: A sequence of $n$ numbers $A = \{a_1, a_2, \ldots, a_n\}$ and a value $v$.
Output: An index $i$ such that $v = A[i]$, or a special value $\varnothing$ (NIL) if $v \notin A$.

Write the pseudocode for linear search, which scans through the sequence looking for $v$. Using a loop invariant, prove that your algorithm is correct.

The first part, writing the pseudocode, seems fairly easy:

$r \leftarrow \varnothing$
for $j \leftarrow 1$ to length[A]
  if $v = A[j]$ $\rhd$ optionally check that $r = \varnothing$
    $r \leftarrow j$
return $r$

The second part, proving that this is correct, is harder than before because we don't have a trivially true initialization of our loop invariant.

Initialization: $j = 1\ \And\ r = \varnothing$ at the start of our loop. At this point there are no elements prior to A[j] and we have yet to find $v$ in A. As such our invariant (that $r$ contains the correct value) is true.

Maintenance: At every point in the loop, the subarray A[1..j] has either contained $v$, in which case its index has been assigned to $r$, or has not contained $v$, in which case $r$ remains $\varnothing$. This means the loop invariant holds for every subarray A[1..j].

Termination: At the end of the loop $j = $ length[A]. We know from maintenance that $r$ is correct for every subarray A[1..j], so at termination $r$ contains the correct value.

Question 2.1-4: Consider the problem of adding two $l$-bit binary integers, stored in two $l$-element arrays $A$ and $B$. The sum of the two integers should be stored in binary form in an $(l+1)$-element array $C$. State the problem formally and write pseudocode for adding the integers.
Stating the problem formally looks something like:

Input: Two $l$-bit integers $A$ and $B$ stored as arrays of length $l$ with the most significant bit stored last.

Output: An $(l+1)$-bit integer $C$ stored as an array of length $l+1$ with the most significant bit stored last.

Here is the pseudocode:

$\rhd$ X is an $(l+1)$-bit array initialized to all zeros, used to store the carries
for $j \leftarrow 1$ to $l$
  $copyC \leftarrow A[j] \oplus B[j]$
  $X[j+1] \leftarrow A[j] \And B[j]$
  $C[j] \leftarrow copyC \oplus X[j]$
  $X[j+1] \leftarrow X[j+1] \oplus (copyC \And X[j])$
$C[l+1] \leftarrow X[l+1]$

(The two terms of the carry-out, $A[j] \And B[j]$ and $copyC \And X[j]$, can never both be 1, so combining them with $\oplus$ is safe.)

#### Blogging my way through Cormen [2/?]

As I said in part 1, I am reading a book on algorithms and have decided to blog my way through. My primary goal in doing so is to improve my writing skills. A secondary goal is to force myself to actually answer the questions.

1.2-2: Suppose we are comparing implementations of insertion sort and merge sort on the same machine. For inputs of size $n$, insertion sort runs in $8n^2$ steps, while merge sort runs in $64n \lg n$ steps. For which values of $n$ does insertion sort beat merge sort?

The question is essentially asking for which values of $n$ is $8n^{2} \lt 64n \lg n$. We can solve this by first factoring out $8n$, which leaves $n \lt 8 \lg n$. Unfortunately this is not solvable using elementary operations. Luckily we are being asked for an integer solution (as computers operate in discrete steps) and we can use the underutilized guess-and-check method.

| $n$ | $8 \lg n$ |
|-----|-----------|
| 14 | 30.46 |
| 41 | 42.86 |
| 43 | 43.41 |
| 44 | 43.675 |

So there we have it: given this data we would prefer insertion sort whenever we have 43 or fewer items.

1.2-3: What is the smallest value of $n$ such that an algorithm whose running time is $100n^2$ runs faster than an algorithm whose running time is $2^n$ on the same machine?

This question is asking us to find the smallest positive integer $n$ that satisfies $100n^{2} \lt 2^n$. This could be solved by doing the math, by looking at a plot of the curves, or by using the above method again.
$$2^{14} = 16384$$ $$(100 \times 14^{2}) = 19600$$ $$2^{15} = 32768$$ $$(100 \times 15^{2}) = 22500$$

So the smallest such value is $n = 15$: an exponential algorithm becomes far worse than a polynomial algorithm when we have only 15 items to worry about. In other words: avoid exponential algorithms!

Thank you JT and JM for giving me the idea to go through the book, and for looking at my posts before I publish them.

Updated 2012-09-05: I had a brain lapse the day I originally published this and accidentally used the natural logarithm instead of the base 2 log for question 1.2-2. How I ever managed to do that I will not know, but I've fixed it.

#### Google translate proxy no longer available

One old trick to bypass simple domain based filters was to use Google Translate on the domain and go from English to English (or the native language to the native language, whatever it might be). I recently came across a link that happened to be using Google Translate in that way (I'm not sure why) and I got an error from Google: "Translation from English into English is not supported." When I tried with other languages I got similar errors. Translating to other languages works as usual. Luckily this trick is not really needed, as there are thousands of available proxies, or one could just make one's own.

#### Cneonction: closed HTTP header

When you make a request to certain websites you may find an unusual header that looks a little strange:

```
[8000 eitan@radar ~ ]% curl -I http://www.imdb.com/ 2>/dev/null | grep close
Cneonction: close
[8001 eitan@radar ~ ]% curl -I http://maps.apple.com/ 2>/dev/null | grep close
Cneonction: close
```

This isn't a typo, though. Some load balancers that sit between the web server and the end user want to implement HTTP keep-alive without modifying the back end web server. The load balancer therefore has to add "Connection: Keep-Alive" to the HTTP header and also has to elide the "Connection: close" from the real webserver.
However, if it completely removed the line, the load balancer (acting as a TCP proxy) would have to stall before forwarding the complete text in order to recompute the TCP checksum. This increases latency on packet delivery. Instead, the proxy uses a hack to keep the checksum unchanged. The TCP checksum of a packet is the 1s complement summation of all the 16 bit words (the final word might be right padded with zeros).[1] By manipulating the ordering, but not the content, of the header, the proxy can avoid changing the TCP checksum except by the fixed amount that the "Connection: Keep-Alive" line adds (2061). In particular:

```
>>> sum(ord(i) for i in "Connection") - sum(ord(i) for i in "Cneonction")
0
```

This reordering also keeps the packet size the same.

Edit 2012-10-31: Make the RFC a link and remove pointless "2>&1". Thanks abbe for the inspiration! Thanks wxs for the proofreading.

## October 29, 2012

### Justin Dearing

#### Windows Internals Study Group Proposal

UPDATE: I've made a Google group to move the discussion to.

While most of the code I've written to put food on the table has been for application development, I've always had a true passion for system development. It's a bit meta, sort of like being the mechanical engineer who just wants to make a better ratchet wrench. However, I think it's important to understand, and if given the opportunity, help write the code that runs and supports the code that makes the end users productive. As a result of this passion, I've been reading the 5th edition of Windows Internals by Windows hacker extraordinaires Mark E. Russinovich and David A. Solomon. This edition covers Windows Vista and Windows Server 2008. While it's been enlightening, and the exercises have helped reinforce the concepts, the book is still somewhat academic. It's also not a book about programming or a book about system administration. It's a book that teaches concepts.
Some of these concepts mesh well with my previous knowledge and are directly applicable to things I've done or want to do. I retain these concepts well. Other concepts, such as the finer points of Windows objects, are not things I can relate to other concepts (yes, I see the unix "everything is a file" parallel, but I don't see the dd if=/dev/foo of=/tmp/foo.img parallel that makes it useful). So I decided to form a study group. I've done this before for the ZCE in PHP5, with the Long Island PHP user group. Here is what I am proposing. All interested parties contact me via the comments or at @zippy1981 on twitter. We will meet once a week, each week covering a different chapter. The expectations for participants would be as follows:

### Before the meeting

• Do the assigned homework for the previous chapter.
• Read the chapter. Take notes on what you did not understand. Make a list of hyperlinks and dead-tree references that you used to supplement the chapter, if any.
• Do all the exercises with the prescribed Windows Sysinternals tools and Windows Debugger (these are all free downloads).
• Do the exercises with equivalent third party tools including:

### During the Meeting

• Take a turn leading one of the meetings. I suggest picking the chapter you are most intimidated by, not the one you are most confident in. This is a reinforcement tool.
• Present your homework from the previous chapter. We will go around the room. Depending on the size of the group and the scope of the assignment, we might have only a subset of the group volunteer to present, or break into small groups.
• Discuss the chapter of the week. Share what you didn't understand; help others who didn't understand things that you did understand. (The whole point of a study group.)
• Last week's discussion leader will demonstrate this week's exercises using both the Sysinternals tools and the third party ones we agree upon.
• The group leader will assign homework for next week.
• Next week's discussion leader will act as secretary and do the following:
  • Collect everyone's links to post to a wiki we will maintain.
  • Take minutes of what we discussed in the meetings for the wiki.
  • Where appropriate, file bugs and feature requests for third party tools that lack the functionality to do the exercises.

### Grading

Now, this isn't high school, and we're not getting graded. No one will chastise you if you don't go to all the sessions or do all the homework. We all have jobs, friends, families, and sometimes a study group isn't the most important thing. I expect the class to consist mostly of adults, but welcome any teenagers who think they can handle the material. Therefore, I will treat you all like adults. That being said, I do want to run this as a pass/fail course and give out certificates. Unlike the PHP ZCE study class I ran, there is no clear external goal. There is no certification on Windows Internals, except perhaps as part of instructor-led courses. Passing will consist of doing all the homework and actively participating in all the classes. If you miss a class, you can make it up by meeting later in the week with at least one other member of the study group. While a certificate seems kind of corny, especially coming from as unaccredited and unprestigious a body as the one we will be forming, I feel this small carrot will help with commitment.

### Location

There is no reason this can't work remotely. I will not turn anyone down because they can only attend via Skype. However, I'd like at least some of us to meet in person. I live in Jersey City and work in Hoboken. I'll travel to Manhattan, Brooklyn, and Hudson, Bergen and Essex counties, or perhaps somewhere a little farther if I can find a really convenient train. We can certainly have multiple physical meeting locations (e.g. a group of people from Chicago meet there and I meet with some people in Hoboken).
Ideally, I’d like a meeting facility with a projector for the group location(s). We’d probably need to use WebEx if we are not all in the same room.

### What edition?

I’m reading the 5th edition (because I happen to own it). The 6th edition covers Windows 7 and Windows Server 2008 R2. It’s also a two-volume edition, so it’s more expensive. I need to research whether there will be an edition for Windows 8. I’m proposing the 6th edition at this point, because part 2 of the 6th edition was just released.

### Flexibility

My proposal is mostly based on what worked for the PHP cert. This will be quite different, though. We might want to break some chapters in half, or dedicate two meetings to a chapter. I hope to get enough interest to make this happen. I think this could work out really well and prove to be a great learning experience for everyone.

## October 23, 2012

### Nate Berry

#### Ubuntu breathes new life into old MacBook

Last year when I got an HP EliteBook for work I thought my days with the old MacBook were numbered. The MacBook isn’t that old; it’s a 2009 Core 2 Duo aluminum body 13″ model, but the EliteBook’s Core i5 is faster. The Mac screen is better, but not by very much. Both processors [...]

## October 22, 2012

### Justin Dearing

#### Excel Link Dump

I was recently told I had to train one of my company’s clients to access our APIs using Excel. This ended up being a miscommunication, and the client wanted to use C#. However, I spent a day re-learning Excel VBA before I got the correct information. Relearning involved writing a simple app with our API and doing a lot of googling to fill in the large gaps in my knowledge. I decided to post the list of useful links I found here, mostly for my own reference. If I ever have to touch Excel again, I’ll add to this list, and perhaps curate it a little better. Perhaps I might be inspired to rewrite the Excel Reverse DNS macro I wrote many years ago.
## October 21, 2012

### Wes

#### Drush: the Drupal shell utility

Drush, while not strictly a module, is listed on drupal.org as a module for lack of a better classification. Drush is a set of shell scripts that can be used to better manage Drupal installations because it actually does a limited bootstrap of the Drupal environment, connecting to the database that it finds available in the nearest settings.php file.

## October 20, 2012

### Wes

#### Drupal Best Practices (or just good ideas)

My notes from a presentation I gave at the groups.drupal.org/long-island meeting.

Q. Why is it important to practice good programming habits?

A. Following best practices, and thoughtful planning from the beginning, will ensure a well-received project outcome, limiting mistakes while speeding development.

## October 16, 2012

### Justin Dearing

#### SQLCLR wrapper for RAISERROR()

Recently I was writing a SQLCLR stored procedure. I made several calls to RAISERROR() in the procedure. While this meant that my stored procedure would be easier to debug in the future because of useful error messages, there was a lot of ceremony involved in these RAISERROR() calls. Therefore I decided to encapsulate all this ceremony into a simple static method.

RAISERROR() presents a small challenge to wrapping in a C# function. That challenge is the variable parameter length due to the printf()-like parameter substitution. The first step to overcoming this is to use the params keyword for the substitution parameters. The second step is avoiding the possibility of SQL injection. Many built-in CLR functions use the params keyword. For example, if you have used String.Format(), Console.WriteLine() or Debug.Print() with substitution parameters, you have used overloads with the params keyword.
For example, let’s look at these two calls to String.Format():

String.Format("Name: {0}", "Justin");
String.Format("Name: {0} Occupation: {1}", "Justin", "Developer");

It may look like I’m using two overloads, but I am not. One might also conclude that String.Format() takes one string and eleventy billion object parameters, all of which have a default of null. However, parameter defaults were only introduced in C# 4, and even the long forgotten .NET Framework 1.0 had a fully functional String.Format(). In actuality, the signature for String.Format() is as follows:

public static string Format(string format, params Object[] args)

Basically, the params keyword lets the user pass a variable-size list of parameters, and populates them in a single argument as an array. So now that we know about params, it seems like it shouldn’t be too hard to write our function. Let’s give it a go.

private static void RaisError(SqlConnection cn, string message, short severity, short state, params object[] args) { . . . }

Now of course you want to protect against SQL injection. The best way to do that is to use the SqlCommand.Parameters collection to scrub the parameters. While you may be very clever, the ADO.NET code is very well battle tested. The only problem is: if you don’t know how many parameters you are passing, how do you know how to build the string? The answer is to generate SqlCommand.CommandText dynamically. This may seem like you are opening yourself up to SQL injection. However, you aren’t. You simply have to generate a string that contains @arg1, @arg2, @arg3 . . . based on the length of the object array that gets passed to params. You then populate the SqlCommand.Parameters collection with the values of that array. If you don’t feel comfortable with this, then perhaps a stackoverflow answer advocating this practice, with 268 votes at the time I wrote this article, will convince you.
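The placeholder-generation step is simple; here is a sketch of the idea in Python for brevity (the C# wrapper performs the same join over the args array — this snippet is illustrative, not the actual wrapper code):

```python
def placeholder_list(n):
    """Build the parameter-name list for n substitution arguments:
    "@arg1, @arg2, ..., @argN". Only these fixed names are concatenated
    into the command text; the user-supplied values themselves go
    through the Parameters collection, so no injection is possible."""
    return ", ".join("@arg%d" % (i + 1) for i in range(n))
```

With three substitution arguments this yields "@arg1, @arg2, @arg3", which gets spliced into the dynamically built command text while the values are bound separately.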
So putting it all together, here is our function:

The above function is copyright 2012 by myself (Justin Dearing). However, I release it under the MIT license, so feel free to use it in any of your code.

## October 10, 2012

### Eitan Adler

#### Reduced Entropy in rand() and random()

TL;DR: Don't rely on undefined behavior, even when you think it should work.

I recently reported a minor issue to the FreeBSD security team. The libc random functions had code [1][2] designed to run when /dev/random is not available. This can easily occur in a chroot or jail environment.

if (!done) {
    struct timeval tv;
    unsigned long junk;

    gettimeofday(&tv, NULL);
    srandom((getpid() << 16) ^ tv.tv_sec ^ tv.tv_usec ^ junk);
    return;
}

This code is designed to provide a minimal amount of entropy in the "failure" case. Unfortunately, it doesn't even provide the entropy it claims to. This is a minor issue because getpid, gettimeofday, and a single long variable don't provide a lot of entropy in the first place (only $\log_2(\mathrm{sizeof(long)})$ bits).

The point of the junk value is to add entropy by using uninitialized memory. This relies on the compiler being "stupid" enough not to optimize it away. Unfortunately, clang and newer versions of gcc are smart enough to use the undefined behavior in undesired ways. clang [3] removes any computation which relies on the undefined behavior, and so produces the following object code:

af0:   e8 5f fc ff ff          callq  754 <gettimeofday@plt>
af5:   e8 7a fc ff ff          callq  774 <getpid@plt>
afa:   e8 65 fc ff ff          callq  764 <srandom@plt>

Note that the junk variable is entirely unused and that the xor operation between gettimeofday and getpid is non-existent.

gcc 4.6 [4] outputs:

ce8:   e8 03 fa ff ff          callq  6f0 <gettimeofday@plt>
ced:   e8 4e fa ff ff          callq  740 <getpid@plt>
cf2:   48 8b 7c 24 08          mov    0x8(%rsp),%rdi
cf7:   48 33 3c 24             xor    (%rsp),%rdi
cfb:   c1 e0 10                shl    $0x10,%eax
cfe:   48 98                   cltq
d00:   48 31 c7                xor    %rax,%rdi
d03:   e8 28 fa ff ff          callq  730 <srandom@plt>

Note that in this case the junk value appears to be (%rsp) which isn't all that random.

gcc 4.2 [5] produces the following code

with the junk variable:
d9f:   e8 18 fa ff ff          callq  7bc <gettimeofday@plt>
da4:   e8 43 fa ff ff          callq  7ec <getpid@plt>
da9:   48 8b 3c 24             mov    (%rsp),%rdi
dad:   48 33 7c 24 08          xor    0x8(%rsp),%rdi
db2:   c1 e0 10                shl    $0x10,%eax
db5:   48 98                   cltq
db7:   48 31 c7                xor    %rax,%rdi
dba:   48 31 df                xor    %rbx,%rdi
dbd:   e8 1a fa ff ff          callq  <srandom@plt>

and without:

d9f:   e8 18 fa ff ff          callq  7bc <gettimeofday@plt>
da4:   e8 43 fa ff ff          callq  7ec <getpid@plt>
da9:   48 8b 3c 24             mov    (%rsp),%rdi
dad:   48 33 7c 24 08          xor    0x8(%rsp),%rdi
db2:   c1 e0 10                shl    $0x10,%eax
db5:   48 98                   cltq
db7:   48 31 c7                xor    %rax,%rdi
dba:   e8 1d fa ff ff          callq  7dc <srandom@plt>

The base version of gcc isn't vulnerable. However, with the upcoming switch to the clang compiler, FreeBSD does become vulnerable.

The first proposed fix was to add the volatile type qualifier to the junk variable. While this seemed to fix the code generation issue, I didn't believe this to be a valid fix, as the behavior is still undefined [6] (I misread the text of the standard). Additionally, the value is likely to be very predictable. A preference was expressed to remove the junk variable, as using it may leak a small amount of stack data.

I proposed the simple and obvious fix of removing the use of the junk variable [7].

In a brief survey of other libraries I noticed similar issues. I will attempt to notify the vendors.

It should be obvious, but undefined behavior is undefined and can't be relied on to ever give a sensible result.

1. random.c r165903 (line 316)
2. rand.c r241046 (line 131)
3. FreeBSD clang version 3.1 (branches/release_31 156863) 20120523 compiled with -O2 or -O3
4. gcc46 (FreeBSD Ports Collection) 4.6.4 20120608 (prerelease) compiled with -O2
5. gcc (GCC) 4.2.1 20070831 patched [FreeBSD] compiled with -O3
6. sections 5.1.2.2.3 and 6.7.2.4 of ISO9899
7. svn commit r241373
Edit 2012-10-10: updated the paragraph referring to the undefined behavior of volatile.

## October 09, 2012

#### Some git-svn notes

When working with a Subversion repository I often miss the use of git features. However, it is possible for git to speak the Subversion protocol:

$ git svn rebase

#### Committing the final patch

There are a lot of workflows for this, but I prefer the cherry-pick approach:

$ git checkout master
$ git svn rebase
$ git cherry-pick commit-id
$ git svn dcommit

#### NFS Mount Network Booting VirtualBox

In order to test modifications to the FreeBSD kernel, I would like to boot a diskless virtual machine. The goal is to be able to quickly test changes I make without needing to recreate the VM each time, or run the installation procedure inside of the VM. You can elect to put the root anywhere you want. For this example, I will be using /home/vm as the root of the virtual machine.

There are a few aspects to this:

• The filesystem that will be booted.
• VirtualBox setup
• Virtual machine setup
• DHCP server
• Host setup

#### Filesystem Installation

1. Create the distribution:

cd /usr/src
make installworld installkernel distribution DESTDIR=/home/vm

2. Create the /conf directory used for diskless boot. See /etc/rc.initdiskless for details.

# cd /home/vm/
# mkdir -p conf/base/etc
# cd conf/base/etc
# cat diskless_remount
/etc
# cat fstab
md /tmp mfs -s=30m,rw 0 0
md /var mfs -s=30m,rw 0 0
# cat md_size
256m

#### VirtualBox Setup

Create a "host-only" interface with DHCP disabled. To do this:

1. Open up the preferences screen and go to the "network" tab.
2. Create a new host-only network (the green plus on the right).
3. Disable the DHCP Server.
4. Select an IP address in the range your VM will have.
5. Note that the DHCP server will continue to run until killed or the machine is rebooted:

## Note that you may kill too much here. Be careful.
# pgrep -fl DHCP
# pkill -15 DHCP

### Create the Virtual Machine

1. Create a new virtual machine. Make sure to select "FreeBSD - 64 bit". Note that you should create this machine without any disk (and ignore the warning at the end).
2. Open up the virtual machine's settings.
3. Select only the "Network" option under boot order (System->Motherboard->Boot Order). Also check "Hardware clock in UTC time."
4. Under network, select the Host Only Adapter created before. Under advanced options, change the adapter type to "PCNet-PCI II."
Also make a note of the MAC address of the virtual machine. It will be needed later.

### DHCP Server

For simplicity I opted to use a really simple DHCP server, dnsmasq:

1. Install dnsmasq:

$ make -C /usr/ports/dns/dnsmasq install clean

2. Configure dnsmasq to serve as a TFTP and DHCP server for the virtual machine interface. Modify /usr/local/etc/dnsmasq.conf (the configuration-specific parts will differ on your setup):
interface=vboxnet0
dhcp-range=172.16.100.100,172.16.100.200,48h[1]
#Note that this mac address is the mac address of the vm noted earlier.
dhcp-host=01:02:03:04:05:06,vm-testing,172.16.100.100,set:diskless
dhcp-boot=tag:diskless,/home/vm/boot/pxeboot
dhcp-option=tag:diskless,option:root-path,"/home/vm/"
enable-tftp

3. Add this to /etc/dhclient.conf so that you can reference your virtual machine by name:
prepend domain-name-servers 127.0.0.1;

### Host Setup

1. Add the following to /etc/rc.conf:
# Required for VirtualBox
devfs_system_ruleset="system"

# Required for diskless boot
dnsmasq_enable="YES"

# NFS Exports
rpcbind_enable="YES"
nfs_server_enable="YES"
mountd_enable="YES"
mountd_flags="-r"
nfs_client_enable="YES"
weak_mountd_authentication="YES"

2. Add the directory to /etc/exports:
/home/vm -ro -alldirs -network 172.16.0.0 -maproot=root

Now you are all set. Just boot the virtual machine from VirtualBox and watch it go!

1. Note that the 172.16.0.0/20 network is RFC 1918 private address space.

## September 24, 2012

### Justin Dearing

#### Looking to improve my software craftsmanship in Cambridge, England on Saturday 2012-10-20

I’ll get right to the point. My employer needs me to be in London on Friday 2012-10-19, and I decided to stay the weekend. I settled on a nice youth hostel in Cambridge. At first I was thinking of touring the museums and university. Then I figured, why not see if I can become a better programmer while I am there?

So first the basics. These days I’m fluent in C#, PowerShell, T-SQL and MongoDB. I work for a large company that produces proprietary closed source software that I cannot talk about or share. However, I do contribute to some open source software, and there’s nothing stopping me from working on your projects for the weekend. My full resume is here. Also, I’ve never done this sort of thing before, or even tried pair programming, so this is entirely new to me.

I’m looking for someone to propose an interesting project for me to pair with them on for a day. If it’s some closed source project you are working on, then I’d expect some form of compensation. It could be dinner, single malt scotch, a used iPad, etc. If it’s open source, then no compensation is necessary (but I’ll never turn down a well earned free lunch). I’d like to see tangible results from our work in a day. If those tangible results are simply turning unit tests from red to green, or increasing code coverage, that’s OK. I don’t care if the situation works out that one of us is a better programmer or we are both equal programmers. There is something to be learned on both sides of a teacher/student relationship.
Are you interested? You can reach out to me in the comments, or send me a mention on twitter. My twitter handle is @zippy1981.

## September 06, 2012

### Justin Dearing

#### Why should developers learn and use PowerShell?

It’s no secret that I’m a PowerShell true believer. I’m often frustrated that many of my fellow developers don’t fall in love with PowerShell the way I did. However, today I found this interview with Doug Finke in which he explains why developers should use PowerShell.

BTW, Doug wrote a great book called PowerShell for Developers. Grab a copy today!

## September 04, 2012

### Josef "Jeff" Sipek

#### Cycling

I got myself a bike in early May. It has been a long time since the last time I biked, but Holly and I thought that it would be fun to get bikes and ride around. I am a geek and so I have to do some stats tracking. Data collection is pretty easy with a phone since it has a (mediocre) GPS. Here are my findings about various ways to keep track of the data collected while biking.

#### Map My Ride

A couple of days later, someone happened to post a way to get JSON-encoded latitude, longitude, elevation, and time for every workout. I quickly grabbed the data and whipped up a small Python script to spit out a minimal GPX file.
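A conversion along those lines can be sketched as follows (the JSON field names lat, lng, ele, and time are assumptions for illustration, not Map My Ride’s documented schema):

```python
import json

def json_to_gpx(points):
    """Convert a list of {"lat", "lng", "ele", "time"} dicts into a
    minimal GPX 1.1 document with a single track segment."""
    trkpts = "\n".join(
        '      <trkpt lat="{lat}" lon="{lng}">\n'
        '        <ele>{ele}</ele>\n'
        '        <time>{time}</time>\n'
        '      </trkpt>'.format(**p)
        for p in points
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<gpx version="1.1" creator="workout-export" '
        'xmlns="http://www.topografix.com/GPX/1/1">\n'
        '  <trk>\n    <trkseg>\n' + trkpts + '\n    </trkseg>\n  </trk>\n</gpx>\n'
    )

if __name__ == "__main__":
    # A one-point ride, as it might come back from the JSON endpoint.
    ride = json.loads(
        '[{"lat": 42.38, "lng": -71.11, "ele": 12.0, '
        '"time": "2012-05-20T14:00:00Z"}]'
    )
    print(json_to_gpx(ride))
```

Any tool that understands GPX 1.1 (including the sites discussed below) should accept the result, since all four dimensions survive the round trip.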

#### Custom Software

Initially, I resisted the temptation to write my own custom software to read in the GPS data. When I found out that Map My Ride did not let you export time information, I thought to myself how hard could it be? I had some GPX files I created using My Tracks, so I went and hacked together a quick script which spit out a 100 kB HTML file that used HighCharts to display them. A few observations I made during this were…

1. One has to do something smarter than dumping over 100 kB of data into an HTML file; otherwise the browser will have unacceptably slow response times.
2. Having the GPX file is a great first step, but some post processing needs to be done to account for the variable accuracy of GPS. (I haven’t experimented with better GPS receivers.) If you do not, you will invariably end up with holes in your data (I’ve seen several 30-second intervals without any location updates), and worse yet, random jumps from place to place when the GPS receiver changes its mind about where you are. This results in spikes in distance traveled between two location updates, leading to huge spikes in speed.
3. I should have prototyped my algorithms in R. While running the script and refreshing the page took only a couple of seconds, python doesn’t quite lend itself to the same algorithmic tinkering that R does. (Matlab would have worked too.)
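The post-processing mentioned in point 2 can be sketched as a simple speed filter; the 20 m/s cutoff here is an arbitrary assumption for cycling, not a tuned recommendation:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in meters."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def drop_speed_spikes(points, max_speed=20.0):
    """Remove points implying a speed above max_speed (m/s) relative to the
    previously kept point. Each point is a (t_seconds, lat, lon) tuple."""
    if not points:
        return []
    kept = [points[0]]
    for t, lat, lon in points[1:]:
        t0, lat0, lon0 = kept[-1]
        dt = t - t0
        if dt <= 0:
            continue  # duplicate or out-of-order timestamp
        if haversine_m(lat0, lon0, lat, lon) / dt <= max_speed:
            kept.append((t, lat, lon))
    return kept
```

A GPS glitch that "teleports" you a kilometer in one second gets dropped, while ordinary riding speeds pass through untouched.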

Needless to say, after about 3 hours of playing around, I more or less gave up on the idea of making my own visualization.

#### Endomondo

I first found Endomondo back in January, when I searched the Android Market Place for an application that would record my GPS location over time. Endomondo is a sports tracker. The free app is pretty simple and straightforward. I think the website is less interesting than Map My Ride’s, but that’s to be expected since Map My Ride caters to cycling. Endomondo makes it easier to find what your best kilometer (or mile) is, while Map My Ride has much better elevation graphing and analysis. While Endomondo felt a bit less feature-full, it lets you export your workouts as GPX with all 4 dimensions.

I was just about ready to switch to it fully (instead of importing My Tracks-generated GPX files), when I heard of Strava during bike-to-work week.

#### Strava

I thought I would give Strava a shot in case it had more features than Endomondo. I was surprised. The user interface is very clean, and it definitely caters to cyclists (even though it supports other activities). You can view the elevation, speed, and power of a ride with either distance or time on the x-axis with a single click. Strava also differentiates between total time and time spent moving, and understands that bikes have mass. Strava also has the concept of a “segment”: a certain path, where everyone who takes that path at any point in time is put on a leaderboard for others to see. It is interesting to see how you compare to other people in the area.

The Strava Android app had issues with the Samsung Galaxy Nexus (after you start recording, when the screen blanks as part of the power saving configuration, the app exits), but within a week or two an app update fixed the issues.

#### Conclusion

My recommendation is either Endomondo or Strava based on what you primarily care about. If you prefer many different sports, I feel that Endomondo will probably work out better for you, while if you are more of a cycling or running fiend Strava is the way to go. Then again, you can sign up for both, and decide which one you like more.

Finally, when I just want a generic GPS track (e.g., for geo-tagging photos from my camera), I use My Tracks and then export as GPX.

## August 10, 2012

### Justin Dearing

#### Saying Thank You and the Stack Exchange Summer of Love

Recently Joel Spolsky kicked off the Stack Exchange summer of love as a way to be proactive against the rudeness that occurs in all mature online forums. Then Josh Heyer clarified the role of niceness in a Q&A forum. I applaud this effort in general. However, I think they forgot one thing. They should remind people to say thank you.

As a participant on several of the Stack Exchange sites, mostly the original trilogy, I am no Jon Skeet, but I do try to answer questions when possible. When I answer questions, two things really annoy me:

• Those who never upvote or mark an answer as correct
• Those who never ever use the comments to say thank you, or “it worked.”

See, I’m a fairly social person, and while I’m not always good at it, I’ve made an effort in recent years to remember to say thank you in emails. On top of that, I like to help people. My reward for answering your question is you saying “that worked for me.” A thank you simply does that better than a green check mark. Don’t get me wrong, I’m all about the upvotes, green check marks, and badges. The gamification is a driver, but please make me feel like I’ve helped a fellow human being solve a problem.

## July 29, 2012

### Justin Dearing

#### MSIExec and Far Manager Part 1: Command line install and uninstall of FarManager 3

This is part one in a series of blog articles in which I shed light on the internals of MSIs using the example of the MSI for Far Manager 3. It was inspired, amongst other things, by lessons learned while creating the Far-2 and Far-3 packages for chocolatey. While the idea of writing those packages was so others would not have to learn the dark arts of command line manipulation of MSIs, I thought I would write this series for those interested in how the sausage gets made.

I’ve written about my love of the File and Archive Manager before. I’m a command line guy so this utility naturally appeals to me. In addition to being a command line guy, I’m also very much a MSI guy. I always prefer installers to unzipping files somewhere, and I prefer MSI installers to exe based installers such as those made with the Nullsoft Scriptable Installer System. On the surface it might seem like these two loves would be in conflict. After all, when you run an MSI you get a GUI. However, there is a command line executable for installing MSIs, called MSIExec.

So using the example of a recent 64-bit nightly build of Far Manager 3.0, C:\users\zippy\Downloads\Far30b2746.x64.20120624.msi, let’s explore how we can install and uninstall an MSI from the command line.

## Getting some help with msiexec /?

MSIExec comes with built-in help. To see it, type msiexec /? from the command line.  Strangely, it will display that help in a window instead of the console. This is similar to the behavior of ntbackup.

## Let’s install Far Manager!

Looking through the command line install options, /i is the switch to install an MSI. To have no GUI feedback I can use /quiet. If I want just a status bar I can use /passive. So to install Far automatically, I can use the following command:

msiexec /i C:\users\zippy\Downloads\Far30b2746.x64.20120624.msi /passive

This will give you a default install of Far Manager 3.0. However, if you use Far, you are by definition a power user, and you probably don’t pick the default install options. It is possible to customize what features to install during a command line install. I’ll explain your options for doing that in part 2.

## Time to uninstall it

I can only see one reason to want to uninstall Far Manager 3.0. That would be, of course, when Far Manager 4.0 comes out! Since Far 3.0 is still under development, that event is probably at least 2 years away. However, we want to be prepared for that day. Also, Far is just the example we are using. There are other programs that are installed with an MSI and deserve to be uninstalled.

If you examine the msiexec /? documentation, you will see that the uninstall syntax is msiexec </uninstall | /x> <Product.msi | ProductCode>. The /uninstall and /x switches are identical, but I prefer /x because it’s terser. ProductCode is a GUID that I’ll explain how to get later. For now, let’s use the path of the original MSI. So in our example the command is:

msiexec /x C:\users\zippy\Downloads\Far30b2746.x64.20120624.msi /passive

That’s fairly simple. However, you don’t always have the MSI available for something you want to uninstall.

## Finding the Product Code

The ProductCode is a GUID. The method for finding it is not obvious. I’ve resorted to using WMI, but there are probably other methods. Specifically, I use the Win32_Product class. The best way to do this from the command line is  with the PowerShell cmdlet  Get-WmiObject. The command to search for all instances of Far Manager installed via MSI on your system is:

Get-WmiObject Win32_Product -Filter 'Name LIKE "Far Manager %"'

This returns the following on my machine:

IdentifyingNumber : {143F0C11-D9F3-4F1E-9037-67BBFDD379AD}
Name              : Far Manager 2 x64
Vendor            : Eugene Roshal & Far Group
Version           : 2.0.1807
Caption           : Far Manager 2 x64

IdentifyingNumber : {B0911B7C-968A-4448-BAF2-0FF2DA34B805}
Name              : Far Manager 3 x64
Vendor            : Eugene Roshal & Far Group
Version           : 3.0.2779
Caption           : Far Manager 3 x64

As you can see, I have versions 2.0 and 3.0 of Far installed. I could uninstall Far 2.0 with the command msiexec /x {143F0C11-D9F3-4F1E-9037-67BBFDD379AD} /passive. If you want to get fancy and uninstall both at the same time, you can with the help of the Start-Process cmdlet.

Get-WmiObject Win32_Product -Filter 'Name LIKE "Far Manager %"' | %{ start msiexec '/x',$($_.IdentifyingNumber),'/passive' -Wait }

Special thanks to Trevor Sullivan (blog|twitter), for helping me out on a Sunday with the Start-Process syntax.

I’ve barely scratched the surface of what you can do with MSIExec. However, it’s a good start. In Part 2 I’ll discuss how to customize the install.

## July 25, 2012

### Josef "Jeff" Sipek

#### Saint Paul, Minnesota

Holly and I went to Minnesota four weeks ago. We stayed at Crowne Plaza in downtown Saint Paul and had enough free time to explore the city and its surroundings a bit. The weather was great, so the photos ended up pretty good looking. You can jump to the whole photo gallery or just look at the select few photos below.

The first day of exploring was pretty brief. We spent about two hours total exploring the other side of the Mississippi river. We found a place with a nice view of the whole downtown Saint Paul. Aside from the hotel from across the Mississippi river

and a bird of some sort (I have no idea what kind of bird this is) that was enjoying the updraft from the breeze hitting the river bank and turning up the steep hill side

I got enough shots to make a panorama of downtown. It is about 270 degree field of view! (I have a 14MB JPEG for those wishing to see all the details.)

On the way back to the hotel, we saw this old (and steep) stair case:

Here’s a shot of the bridge up close. Did I mention that the weather was great?

The next day, we didn’t explore since we didn’t have much time. However in the evening the  mayflies took over the skies. It was amazing and annoying at the same time. The next morning, we went for breakfast and saw dead mayflies everywhere.

After breakfast we went on to explore the city a bit more. We immediately noticed that the whole city seemed deserted. Businesses just didn’t open on Sundays; after an hour of walking around we ran into maybe 5 people total.

Eventually, we asked a security guard what was going on. He said that downtown Saint Paul has enough government offices that the whole city more or less lives Monday to Friday, 9am to 5pm.

Eventually, we made it to the capitol hill — the capitol is on a hill. Again, I couldn’t resist the temptation to make a panorama. This time it is only 180 degrees wide. (17MB JPEG)

Then I climbed up the stairs, and took enough for a third (and final) panorama. Again, about 180 degrees wide. (9MB JPEG)

Afterwards, we moseyed on toward the Saint Paul Cathedral. (Our last stop before going to the hotel to grab the bags and heading to the airport.)

There are a bunch of photos in the gallery that I didn’t include in this post. That concludes this edition of “Travels with Jeff & Holly.”

## July 17, 2012

### dorgan

#### Changing Jobs

After working for NetBlazon for 3+ years, come Monday I will be starting work with a new employer. I will be working for Walt Disney Parks & Resorts Online as a Web Developer on the analytics team.

Leaving NetBlazon was a very hard decision, but it was something that needed to happen because, now that I have a full family (it includes twin girls now), I need to think about my future and the future of my family. This includes retirement for myself, but more importantly a way to provide for my family if I should be injured or out of work for an extended period of time (short/long term disability).

I have learned many things over the past 3 years at NetBlazon and I am very thankful to Pete & Susan (the owners) for giving me the opportunity to work for their company. We have built some great solutions (AccurateTax, FeedExact & Click-2-Customer), and I look forward to seeing what they produce in the future.

## June 23, 2012

### Justin Dearing

#### Firing with dignity: In defense of the immediate escort

Earlier this week, Chris Dixon (blog|twitter) wrote about firing from a startup. I’ve never worked in a start-up. I’ve also never run a company or fired anyone. However, I have been fired from an established, mature company. In the comments I shared my experiences of being fired and how I felt that being escorted out the door and not being allowed to say goodbye to my coworkers actually added dignity to the process. Another commenter, Phillip Rhodes, disagreed. I wish to expand upon my thoughts on the matter here.

This is the first of a two part series. In this part I will talk about my experiences at an organization where coworkers of mine were fired. In the second I will talk about the time I was fired.

One of my first jobs was at a medium sized local ISP. It was not a perfect company by any means, but I learned a lot there, and my time there shaped my opinions about many things including my belief that it is in the fired person’s best interest to be immediately escorted out the door.

Several people were let go during my tenure at this company. There was one period of layoffs due to a lost client. However, most were firings due to performance and there was only one I’d argue as being an outright mistake. I believe that some level of escorting was involved in all of them, but I only witnessed the immediate aftermath of one.

The one I witnessed was my boss. I had at this point risen to the rank of Unix/Midrange administrator. Two senior unix admins had recently given notice, and two more were brought on. One was to be the boss of me and the other new hire. It quickly became apparent that he was not a good fit. At some point I was commanded (he actually told the help desk operator who relayed this order, “This is my command!”) to document the entire unix infrastructure, and he declared I’d be the only one on call until this was completed. I did not bother to object to this because I knew his end was near. The in-house recruiter’s whiteboard had UNIX ADMIN written real big on it.

One day shortly thereafter, the CTO called me and the other junior unix admin into his office. I put on a solemn looking face as I correctly guessed we were about to be told to change all the passwords. We divided up the systems between the three of us. Then I walked back with the other junior to his cube. I needed to break the solemnity at this point, so I said something to the effect of, “Well, that’s the biggest non-surprise of the year!” At that point, my now ex-boss walked up to us with a VP escort and proclaimed, “Guys, I am fired.” He then walked to his desk and gathered his things with much ado.

To this day, I’ve never had a more awkward moment in my professional career than having to interact with my ex-boss, who was at that point a dead man walking. I would never describe that particular person as ever conducting himself with dignity, but this was a new low for him.

In this particular case, an attempt was made to immediately escort the fired person out. Perhaps more could have been done; he could have been fired over lunch, for example. Honestly, this particular person was prone to emotional outbursts, and I don’t think a public setting would have encouraged the man to restrain himself. The only way to preserve the person’s dignity would have been to deliver the news in an abandoned building. Finally, regardless of your general policy on the removal of fired persons, this particular case warranted immediate removal for the safety of everyone in the buildings.

That was the only firing I witnessed at that company. However, I was involved in the password reset for another one. This is, incidentally, the firing I disagreed with. This is a person whose hand I would have shaken had I encountered him immediately upon being given the news, when he was a “dead man walking.”

Here’s the thing: I did get to shake his hand. We had lunch several weeks later, after he started his new job. He contacted a few coworkers, and we all spread the word. He got to interact with us on his terms, with the pride and dignity of someone who had recently started a new job.

Now these experiences were all vicarious to me, and I know not what lies in another man’s heart, so I don’t know how these people really took the news. Still, they were formative experiences for me. As such, they shaped how I dealt with my own termination, which is the subject of the next installment in this series.

## June 22, 2012

### Justin Dearing

#### Firing with dignity: My own walk of shame

This is the second and final part of my series on firing. It is an apologia for the practice of immediately removing the terminated employee with an escort. In part one I talked about two firings of coworkers of mine that were formative experiences for me. In this part, I will talk about my experience of being fired and escorted out immediately.

At this point in my career I was a .NET developer. I was a rising star in my company, until I made a mistake that led to a fall from grace. Officially I was laid off. Two other developers from my office were laid off that day, and I’m told several from another office. I don’t wish to speculate how long I would have lasted had a round of layoffs not happened. That’s not the point. Through a series of factors, a decision was made by my employers to terminate my employment. That decision was executed in a manner that took my dignity into account, and I will describe that execution here and provide commentary.

My CEO asked me to step into the conference room. I followed him and closed the door. He said my boss would be joining us and reopened the door. My boss entered and the CEO closed the door. He began a solemn, long-winded speech in which I was told I was no longer going to be part of the company. I was told I could call him the next week if I wanted to talk, and that the head of HR would contact me about my severance package and I could provide said person with an address to ship my stuff. I was asked to hand over my access card, and told we would go to my desk, where I could grab any essential items such as my keys or my wallet, and then we would walk out the door.

I made no verbal utterance during the process. I was in a bit of shock. I took my access card out of the back of my George Costanza wallet, the same spot where every access card an employer has ever issued me has been stored, and where my current access card is stored. I then walked back to my desk.

At my desk I packed some things into my book bag. I realized I should not bring all the books on my desk. I debated taking my IBM Model M keyboard, an important talisman of mine. I realized that just this once I should avoid my pack-rat tendencies and let it be shipped to me.

It was almost lunch. I called a friend who lived nearby, walked to his apartment, and dropped the news. He bought me lunch. I walked to the local library and read a book for several hours. I called a recruiter and an old boss I consider a mentor. After 5pm I met with the coworker who got me into the company. We drank. I drank a lot that night, but not beyond a normal Friday night’s drinking at that point in my life.

Around 6:30am Saturday morning, my friend and now former coworker dropped me off at my house. I gave him my company laptop, which the CEO had forgotten to ask for, and informed HR via email that it was in his possession.

Sunday night I told my fiancée. She took the news harder than I did. However, she was there to be my harbor for this storm. Luckily things moved quickly so I didn’t have to lean on her for very long, and the wedding didn’t need to be rescheduled.

I never called my boss. I didn’t need closure. I knew what my sin was. Two Fridays later at 11 pm I started a 30 day contract that lasted almost a year. Things got progressively better from there.

That’s my story. What do I feel my CEO did wrong? There was no need to harp on the fact that I closed the conference room door. Also, I wish he had let me open the door to the elevator bank myself when I walked out after getting my things. However, that was my boss exhibiting idiosyncrasies that have always jarred with me. Had I been a weaker person, had I perhaps cried, or yelled, or acted out in a way I would later regret, he would have minimized those actions and minimized my regret. In other words, he would have acted with compassion and taken my dignity into account. The extra level of defiance I would have needed to exert if I really wanted to reenact the “say good night to the bad guy” scene from Scarface ensured that if I did, I could rest assured that I really wanted to do it.

So that’s my one complaint about the execution of the termination. Everything else about it was perfect. I was allowed privacy. I made no verbal utterances and attempted no form of defiance, so I can’t grade their counter-reaction. However, my boss and CEO were both individually capable of physically restraining me had I reacted with violence, and they stood between me and the door. Nothing ill was said of me by either party, and my boss was one of the people who connected me with headhunters.

I’ve corresponded with several of my coworkers since getting back on my feet. I’ve done lunch and drinks with some. I don’t feel I was denied the ability to say goodbye. Had I been allowed to be a “dead man walking” for an extended period of time that day, and to say goodbye to everyone that day, I would have had to shake their hands as a lesser man. For better or for worse I define myself as a programmer, and therefore my job is closely tied to my identity. When I was fired, part of my identity was violently torn from me. I had the weekend to come to terms with being “Justin, the between-jobs developer.”