Blog. And sometimes just whinging.

Comcast cares, but not about fuel, time or money

// November 7th, 2012 // Comments Off on Comcast cares, but not about fuel, time or money // Rants, tech

Update 2: Nope. Now they’re saying she’s ‘lost’ TWO modems and has to pay for both of them. So, basically, if they can’t properly document the whereabouts of their equipment, it must be your fault.

High-speed internet suggestions in the Atlanta area?

Update: The @ComcastCares Twitter people are offering to look into this for my friend. Presumably pre-truck roll. Before it was just (paraphrased) ‘Sorry. Let us know if you have any questions after the truck comes to your house [for no apparent reason and then leaves again none the wiser].’

BTW, I like the whole @ComcastCares on Twitter thing. It’s actually helped me at least once with a d.u.m.b. situation. But don’t fool yourself that this was some brilliant idea on Comcast’s part. A guy who used to work there named Frank Eliason took it upon himself to start it and then showed the company that it worked. To their credit Comcast let him create a small team to do this sort of online triage, but they were, and presumably still are, just a small hand trying to steer a large gun away from continually shooting the foot of the Comcast giant.

Friends may have heard me whinge about something like this before in person. OK, I realize that doesn’t narrow it down. But bear with me. I’ve never jotted down my thoughts online about this issue and it really bugs me. No, this is not election-related. I’m not up in arms about Honey Boo Boo’s poll numbers or who names a child ‘Saxby Chambliss’ or some such shite. It’s Comcast.

Comcast just seems to want to waste not just your time, but gas and man-hours on trivialities. And they just don’t seem to either A) give two shits about it or B) realize that they’re just wasting gas (and my money) and ERMAGERD STOP BERNIN ALL TEH GASS! This seems to be institutionalized waste, too. She says from her First World perch on her couch in front of her giant-ass TV plugged into #*&^%@^& Comcast cable.

Roll a truck. Roll a truck. Roll a truck. Is there a union inside Comcast that we don’t know about that says “When in slightest doubt, roll a truck. If a tree falls in the woods, roll a freakin’ truck!” And we, undoubtedly, pay for this waste via our exorbitant cable bills. I don’t know about you, but my cable bill is ludicrous for what I get. OK, back up. Actually, I do know about several of you, including the friend who just this week asked me how she can best get rid of Comcast because she feels like she’s being fleeced like a whale in Vegas. But for me it’s a monopoly. I literally have no choice if I want cable TV (and by cable I mean HBO, AMC and ‘all that other stuff’). It’s Comcast or it’s nothing for me.

So what prompted me to write this screed and in the sentence after this one compare Comcast to the dystopian bass-ackwards grinding dumb machine that is The Party in Orwell’s 1984? Well, a friend is currently dealing with some Comcast INGSOC. You see, she’s just realized that The Party… errrr, I mean Comcast… or Xfinity… or whatever their name is… is still charging her for her old cable modem. She asked for a service disconnect when she moved; a tech came, disconnected the service and took the equipment away, as is their duty. However the cable modem, which is now marked as inactive on her account, is still on her monthly bill. The undisputed part is that it’s clearly not the cable modem currently in use. Each unit has a unique ID called a MAC address that Comcast can easily view, so they know exactly which unit is online. If for no other reason than that once upon a time another truck came and installed a different cable modem at her current residence on a completely different order, which they have in their system.

To recap:
1) A disconnect order is somewhere on file at Comcast showing that her service was disconnected, and the tech closed that order with no notation that anything was wrong or that the modem was not found. Since equipment retrieval is S.O.P. for a disconnection order, the assumption has to be that the device was picked up. Otherwise, the tech would/should have noted that the modem was missing.

2) The service at her current residence is using a Comcast modem that they are fully aware is not the modem in question and was connected on a completely different order, also somewhere on file.

3) The obvious conclusion is that the old modem was picked up along with the other equipment during the disconnect order and someone dropped the ball in removing it from her bill. Obvious to the Proles, that is.

4) This means they need to roll a truck. Because for every little thing that a billing person is not capable of typing into CES… a truck must roll, a tech must be paid and gas must be burned and… Wait. WTF?

Yes. To sort this out, Comcast’s Ministry of Truth is not looking at the previous disconnect order, using logic to extrapolate the clear error on their part (FYI, Comcast, the tech works for you – their screwups are your screwups) and saying, “Ah. The only logical conclusion is that we screwed up. A tech came out on xx/xx/xx when you moved and disconnected your service and, if the modem had been missing, the tech should have noted this. They did not. Sorry for the confusion, Miss, here’s your refund.”

No, instead of that, their brand of DoubleThink, let’s call it XfinityThink, means they need to roll a truck to have a tech… look at a thing that is not there and declare “It’s not there”.

Yes, that was a long run-on sentence above, but you read it correctly. A tech will show up, stare at the cable modem that they already know is not the one in question, not see it transmogrify into a completely different cable modem whose whereabouts they have no way of knowing, say “OK,” and then get back in a truck and burn some more gas.


Instead of a human with decision-making abilities looking at this situation and saying “That’s a complete waste of time, money and gas, old bean. And it is pointless as, for all we know, even if she had the modem (which seems logically unlikely and would still be our fault for not properly maintaining a record of it), she could have delivered it as an offering to Cthulhu for shinier hair and whiter whites. A tech going to her house is not going to change anything other than our gas consumption. And waste her time.”

I mean, seriously, is this tech going to show up and ask to search her house for the missing modem like some TV Cable Cop? CSI: Xfinity? Today’s episode: “Don’t go to work, we need to come look at a modem that’s not there.” A modem that’s probably an old DOCSIS 2 unit anyway that they would just toss in a landfill. But if they can continue to charge you for it — but waste twice the value in gas and tech time — hey, whatever. No. That would be logical. So that’s not the plan. Bad is good, brother. Smart is stupid.

A thing that is Not There is more Not There once we’ve not seen it being Not There with our eyes.

Personally, I can think of at least four instances in recent years where I’ve had a truck sent to my house for completely trivial things that someone could have done over the phone. And every time I wondered “How much did that just cost? And don’t they realize what a waste of money it is? And… how much am I paying for it, ultimately?”

Or do virtual monopolies care about such things like burning crap-tons of petrol? My friends who live in the area of town where there’s actual competition and therefore lower prices can stop laughing and shut the hell up. Or tell me where I need to move so I can get RCN.

Comcast… you are a big stupid beast.

New T-Shirts

// September 15th, 2012 // Comments Off on New T-Shirts // Represent

I’ve just put up two new t-shirts at RedBubble declaring my love of old video games. (click image)

UPDATE: I guess Atari didn’t like the first one and I had to take it down…

I also designed a shirt for Skeptical Robot that’s up for pre-order (and then they’re gone!). (click image)

Surly tough

// August 9th, 2012 // Comments Off on Surly tough // Represent

Granted, it was forged in a… well, a forge. OK, technically a kiln.

But this is still pretty cool. My friend Amy Davis Roth makes awesome ceramic jewelry, some of which I’ve bought and received as presents over the years. Well, someone* recently had a house fire and look what survived.

Click for larger image.

Yes, the kiln probably burns hotter than a house fire, but it’s a different sort of thermal event. So it’s still amazing to me that this thing came out like it did. You can buy your own from Amy’s website Surly-Ramics.

*I don’t know who they are, I just have the pic and the story.

A tribute to Ray Bradbury

// June 7th, 2012 // Comments Off on A tribute to Ray Bradbury // Represent

Ray died on June 6th, 2012. You can read my tribute to him at or

Happy Robonukkah!

// December 24th, 2011 // Comments Off on Happy Robonukkah! // Represent, Uncategorized

TWE and Maddie wish you a very happy Robonukkah.

Walking Eye, Hank! They’re all the same.

// November 27th, 2011 // Comments Off on Walking Eye, Hank! They’re all the same. // tech

I’ve been working on a robot for a while. Well, a specific robot. I’ve tinkered with some others prior to and along the way. But for this particular bot, I started fiddling with servos and controllers for an arm last winter and since then I’ve bought a bunch of micro-controllers and itsy bitsy computers to fiddle with too. The results of all that fiddling have been sort of percolating in my head and have recently, in long bursts of work, been spat out into this, the Tiny Walking Eye. I never intended to do a pre-design, per se, and I’ve let all the ideas sort of clump together so that I knew roughly what I wanted to build, just not exactly how I’d build it. I built an eye. Then an arm. Then… I built all that you see below in a couple of long, late nights. Given that I ‘made it up as I went’, I’m fairly pleased with the aesthetics of it so far as well. Nobody wants an ugly robot.

TWE robot as of Nov. 27th, 2011

OK, it’s not actually tiny, it’s about 22 inches high at the top of the video camera. And it doesn’t walk, it has 4 drive wheels in a differential (‘tank’) steering configuration. And the eye is a camera. But my friend Christopher (who I also do a podcast with) and I were making Venture Brothers jokes and I got fixated on “Giant Walking Eye” and so TWE was christened.

The chassis is a Dagu 4WD Wild Thumper bot chassis (ordered from Pololu) which I’ve extended a few inches. In hindsight, the 6WD chassis would have been better. Maybe for TWE2. The shell is foam PVC sheeting which is easy to cut for someone who lives in an apartment and doesn’t want to annoy her neighbors (more than she does already). Also, it’s very light and I need to keep everything that sits on the chassis under about 10 pounds. The chassis undercarriage has a power distro box with motor controllers, an emergency cut-off switch and two 7.2v batteries in parallel for the motors.
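For the curious, the ‘tank’ steering mentioned above boils down to some very simple mixing: one throttle/steer command becomes two wheel-side speeds. Here’s a minimal sketch of that mixing in Python; the function name, ranges and clamping are my own illustration, not what TWE’s actual motor controllers use.

```python
def tank_mix(throttle, steer):
    """Mix a single throttle/steer command into left/right wheel speeds
    for differential ('tank') steering. Inputs and outputs are -1.0..1.0.
    Positive steer turns right: the left side speeds up, the right slows.
    (Illustrative sketch only; real controllers want PWM values and
    their own scaling.)"""
    left = max(-1.0, min(1.0, throttle + steer))
    right = max(-1.0, min(1.0, throttle - steer))
    return left, right
```

Feed it full throttle with no steer and both sides run flat out; full steer with no throttle spins the chassis in place.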

There’s a Mars Rover-esque platform that sits on a revolving turret on the chassis. On top of that is the brain box which also houses the front-facing arm. The arm is also made from .157 PVC and is powered by tandem servos (one reversed so that they lift in concert) for shoulder and elbow. The two large-scale shoulder servos are Hi-Tecs and the two in the elbow are some crazy Chinese servos I found on Amazon which have huge amounts of torque and metal gearing. The elbow will do most of the work by itself and the shoulder is only needed when the arm needs to be extended. The wrist ends with a gripper which I bought from Parallax or RobotShop or somewhere. I’ll be using Phidgets or Pololu controllers for the servos (depending on which ‘network’ I end up using – more below).
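Since one of each tandem pair is mounted reversed, the two servos can’t just be sent the same pulse: the reversed one needs its command mirrored about center so both lift together. A tiny sketch of that mirroring (the pulse-width constants are typical hobby-servo values I’m assuming, not measured from TWE’s hardware):

```python
# Typical hobby-servo pulse widths in microseconds (assumed values).
SERVO_MIN_US, SERVO_MAX_US, SERVO_CENTER_US = 1000, 2000, 1500

def tandem_targets(pulse_us):
    """Given the pulse width for the normally-mounted servo of a tandem
    pair, return targets for both servos. The second servo is mounted
    reversed, so its pulse is mirrored about center; both then move in
    concert. (A sketch; a Phidgets or Pololu controller has its own API
    and calibrated endpoints.)"""
    pulse_us = max(SERVO_MIN_US, min(SERVO_MAX_US, pulse_us))
    mirrored = 2 * SERVO_CENTER_US - pulse_us
    return pulse_us, mirrored
```

At center both get 1500 µs; drive one to an endpoint and its partner goes to the opposite endpoint, which is the same physical direction once it’s flipped in the mount.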

The white PVC and angled cuts give it a sort of 70s sensibility and I’m OK with that. Plus, you should note that there’s nothing on the side yet and there will be. There’s only one ultrasound range finder right now, but I’ll be putting 5 more on (one on each angled corner and one in the rear) as well as some other goodies on the side. And in the rear will be a little boot to put the secondary batteries in (two 6v totaling ~ 9Ah for the processors and controllers).

Up top there’s a camera which pans and tilts and immediately behind that is a 7″ LCD display. The display will be hooked into the core micro-controller (probably a Parallax Propeller board) which may or may not also be hooked to an ultra-small PC (a Gumstix Fire or a Genesi Efika – both of which I’m playing with). This all depends on how I want the robot to be controlled. Without the PC, I’d be making it completely autonomous (maybe an XBee for remote control or logging). With the PC, I’d be able to store and execute more complex code and also tie back via Wi-Fi to another computer where I could assume manual control if desired. I haven’t decided. Maybe I’ll try to make it do both. It all comes down to software and batteries.

Additionally, I need to decide how this will all be connected. Depending on which controllers and computers end up inside, it will either be primarily USB, Ethernet or a mixture of I²C and USB. I’ve mocked up both and there are benefits to each. We’ll see. USB is winning at the moment given that all the controllers already inside have USB ports and everything else could be wired to the micro-controller (which also has USB).

Then, I need to figure out how it recharges itself. That’ll involve building a charging circuit for the various battery systems and a station it can find on its own (probably using RFID triangulation).

ANYway… that’s the state of TWE. In case you were wondering. Which you totally were. Hope you enjoyed. Cuz… everybody needs a robot. For… ‘reconnaissance applications’.

Sorry I didn’t update this more as TWE progressed. I moved to a real shop at the Artisan’s Asylum and worked on TWE in fits and bursts. I put in control systems for the drivetrain, arm and sensors. Played around with ROS and OpenQbo and whatnot. All in all, TWE was a fair success (there were things I could have done much better, admittedly) and, admit it, he’s adorable.

But then, alas, other projects got in the way and TWE ended up on a shelf. I eventually pulled out and loaned the drivetrain to Brandon from Rascal Micro for a while to do some demos on and I pulled a few things out here and there for other projects.

BUT! Don’t be sad. This week I took the drivetrain out and reprogrammed the controllers. I removed the arm from what’s left of TWE’s chassis (that arm turned out good, dude. Seriously.), the dual cameras, and am tinkering with a different idea now. It’s already got a name of course. Because I have to name everything. SHANE — I was running the drive train up and down the aisle at the Asylum using an RC controller and, joking to a fellow inmate, I called ‘Come back, Shane!’ And that’s how names are gotten. ;)

Anywho. TWE has left us, long live TWE. But something new to come. :)

What Am I Ticked Off About re: Mozilla/Firefox?

// November 17th, 2011 // 2 Comments » // Rants, tech


EDIT: I’m very grateful to Mozilla for listening and eventually creating the ESR track for both Firefox and Thunderbird. This ( ) effectively fixes all the below for us.

EDIT 12/2/11: heh… Looks like I’m not alone in my thoughts. Comments at Slashdot on Firefox losing market share [image].

Mozilla is fighting an invisible battle against Google Chrome. They’ve implemented a ‘me too’ rapid release cycle for Firefox (and therefore also Thunderbird since they have [again artificially] tied their cycles together) in answer to Google’s rapid release cycle.

And the poop started hitting the fan. Not only was the public confused (“OMG! My browser’s really old! I only have 3.6 and they’re already up to 6! Was I asleep for a year?”) but enterprise IT folks were not amused. We can’t afford to have a browser we just deployed be declared un-supported mere weeks later. Similar remarks here:

Yes, there is a working group that was put together after Mozilla finally admitted that enterprise IT had a valid point ( )… in August 2011 after the release of version 6… two more major releases have come out since then. But right now there’s just an ESR proposal and… that’s where we stand. In the meantime, time continues to go forward at the same pace and we’re still dealing with actually using the browser. We essentially had ESR, then Mozilla took it away to go tilt at a windmill called Chrome. Now we wait while people talk about ESR… or we don’t wait and we move on.

We want to love you, Firefox! Why won’t you let us love you!??

The browser we’d fought for, the browser that finally took away share from IE, the browser that worked across platforms and became popular enough for sites to start to say “OK, we support Firefox too.” That browser’s maker has seemingly turned into a parody of Microsoft trying to keep up with [Apple/Google/etc. and yes, even Mozilla], clumsily announcing after the fact “Oh, yeah, we’re gonna do that too!” Now I have users who used to complain maybe about a website complaining about the browser itself.

So now, no more stable release followed by a cycle of improvements and bug fixes (all the while being supported because the ordinal number up front hasn’t changed and won’t change until the next release goes stable and comes out of beta). Now it’s release, release, release and pray to bob the bug fixed in 5 doesn’t show up again in the ‘all new super hot off the press’ 8.

And, most importantly, this all loses sight of how the browser wars ended. They ended with Firefox the moral and spiritual victor on one solid principle: Build a better browser and people will use it. Goliath IE was slain (or at least severely maimed and forced to also get better) by one simple principle: Build a better browser and people will use it. Did I mention “Build a better browser and people will use it”? Not “OMGZ googlez has bilt a browzer and they’s gonna take all our search eyeballs moneys! Run around in circles!!!”

Now Firefox is so effing scared that they’ll lose that sweet Google search eyeballs cash that they’re all but making it a self-fulfilling prophecy in their panic. ( ) Why? Because Google planted that idea in their head when they released Chrome and now Mozilla’s management can’t see past it. It’s like a bug in their brain that’s making them crazy. (“This is Ceti Alpha V!”) They are so fixated on the forest they don’t see the trees catching fire. But the truth is that Google will keep paying out that cash as long as Firefox brings in eyeballs. That is, unless Mozilla gets so panicked they start acting like headless chickens and _manage to drive all their customers away_!

Which is exactly what I think might be happening. Hell, I’M using Chrome now because I just can’t take it any more (and Safari is in the crapper too as far as I’m concerned – so I don’t have much choice… in a world that used to be all about choice).

Now, my team is forced to sit down and talk about “What browser do we support officially if/when Firefox doesn’t get back on track. Also, we’re screwed email client-wise if Thunderbird ends up under the bus for no good reason.” My server guy… my poor staunch advocate for open source and non-big brothery software is forced to admit that we might have to consider Chrome! He wants to love you, Firefox! Hell, he does love you. But his love is wavering. So what exactly is wrong? Sheesh, where to begin. And, honestly, I’ll forget something. It’s all become a blurry laundry list of complaints from minor annoyances to show-stopping bugs (Stack space errors?? Really?? In 2011?). But, quickly and anecdotally, go google this:

Those people? They’re not switching to Chrome because Chrome is sexy or amazing… largely you’ll see them saying that they are leaving Firefox because of Firefox’s problems or shortcomings, not Chrome’s features. OK, on to my gripes as an enterprise (education, actually, but we work the same and expect the same) IT shop.

* Instability. We’ve gone from a stable Firefox (sure, it had its quirks, but stable enough for us to say “we support Firefox” and be able to stand by it) to having to say “well, if you’re having problems in Firefox, you may have to use Safari/IE for that”. And then bracing for the next release 6 weeks later. (In all honesty, we’re just leaving most people on 3.6.x)

* Page rendering and slowness. This has forced us to downgrade some users who just can’t deal with it to 3.6.x. And we’re clearly not alone:
And, tellingly, you’ll still find a link to 3.6.24 on Mozilla’s download site. Even they tacitly admit there’s still a reason for it to be there:

* Let’s talk about slowness. How can it be that Chrome got faster and Firefox got slower? ZDnet sure thinks so. Compare these two Kraken scores:

You’re killing yourself, Mozilla. No excuses, no waffling. You. Are. Killing. Yourself.

* New weirdness depending on if you’re on 6 or 7 or 8. Profiles being trashed, bookmarks reverting or disappearing… What works in 7 might not work in 8. What was fixed in 7 from 6 seems to once again affect 8. And boy is it RAM hungry. But it was i/o hungry before, so that’s probably a step forward for users with networked home directories… Submit crash report, submit crash report, submit crash report.

* The artificial rapid release cycle creating browser instability is also unnecessarily affecting Thunderbird. For us, Thunderbird 8 is unusable. It _simply does not work for some users_. Add an IMAP account with lots of folders and mail and it crashes at startup. Get someone with less mail and it’s fine (but Lightning may or may not work). Submit crash report, submit crash report, submit crash report.

* The rapid release cycle also tends to break plugin/add-ons, often for no other reason than the fact that this version, which isn’t much different, starts with a different number. We even saw Thunderbird run into this day of release when we rushed to test it. In my case, instead of bringing Lightning with it, it disabled the already-installed lightning add-on and then refused to upgrade (Lightning will be upgraded on next restart -> restart -> Lightning will be upgraded on next restart -> removed lightning manually -> install lightning -> Lightning is not compatible with this version (WTF?) -> clear everything out -> install, go to add-ons, aha! Lightning link in featured add-ons -> install Lightning -> Lightning will be installed on next restart -> restart Lightning will be upgraded on next restart… give up.) That’s… crazy. This is Mozilla we’re talking about…

Dammit… we were pinning our hopes on integrating Lightning into our environment to stem the tide of requests for Outlook from those who just wanted calendaring of some sort. Now we have a 1.0 release of Lightning for a version of Thunderbird we can’t even deploy. ARGH! Because of Firefox chasing Chrome around like a big dumb puppy chasing a car. (“It must want to eat my food! GRR! Chase!”)

I think Mozilla has lost their minds. Please. Please. Go find your minds and put them back in before you lose all that you’ve worked and fought so hard for (and we’ve supported so strongly) because you got a little scared by some actual competition. This coming from someone who wants you to succeed. Who’s begging you to succeed. I’m your fan. Your cheerleader. And now I’m about to break up with you because… you won’t let me love you!

Additional reading from way back at version 5 (oh, wait, that wasn’t that long ago…)

Getting to your iCloud calendar from iCal 4 (OSX10.6) or a CalDAV client

// October 13th, 2011 // 67 Comments » // Rambling, tech

UPDATED 10/15/11 with new instructions!

I work in an environment where all the machines are tied to a single sign-on system and all the users, be they Mac, PC or Linux, have their home directories mounted from a server at login. Right now, OSX Lion won’t work in that environment, so all our Macs are running 10.6 or 10.5.8.

But what if I want to use my iCloud calendar from work via iCal (or another CalDAV capable client**)? It’s pretty damned easy, actually, I’m happy to say.

Maybe this is published somewhere, maybe not. But I figure a couple of my peeps might benefit from me posting this up. So here goes.

  1. Get your calendar set up and up to date in iCloud first. Don’t monkey with doing that after the fact.
  2. It just got easier. Skip to step 10 and ignore the steps below marked [SKIP].
  3. [SKIP] Open in a web browser and go to your calendars. Click on the circular ‘wireless’ icon to the right of the name of the calendar you want to use. The calendar you want to use must be shared.
  4. [SKIP] Note the name of the server right after webcal:// (example:
  5. Open iCal. (I’ll be referring to iCal from here on; I can’t say for sure how other CalDAV clients will respond.)
  6. In iCal, go to Preferences -> Accounts and click the add account button (+)
  7. Select CalDAV as the account type.
  8. Enter your iCloud username (for instance, and password
  9. [SKIP] For server address you need to slightly modify that server name you jotted down in step 3
    If the server was, you would replace www with caldav and enter
  10. For the server address simply enter “” (I don’t know when this started working, but it does.)
  11. Click create. If presented with a choice of two possible servers, choose the one that says, not — IF YOU GET AN ACCESS NOT PERMITTED ERROR then you’ll need to use the greyed out instructions instead.
  12. Live large. You now have your iCloud calendar and reminders in iCal. You might want to change how it refreshes, if you’re like me and want control over that. Push may not work as well in iCal 3. Otherwise, it’s a full CalDAV implementation; add, delete, modify, etc.
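If you’re wondering why entering the bare server name in step 10 works: standards-compliant CalDAV clients send a “who am I?” PROPFIND asking for `current-user-principal` (RFC 5397), and the server points them at the right account path from there. Here’s a sketch in Python that just builds that request without sending it; the exact auth requirements on Apple’s side are my assumption (iCloud may well want more than plain Basic auth), so treat this as illustration, not a working iCloud client.

```python
import base64

def caldav_discovery_request(username, password):
    """Build the CalDAV 'current-user-principal' discovery request a
    client sends when given a bare hostname like caldav.icloud.com.
    Returns (method, url, headers, body). Sketch only: shows the shape
    of the request, not a guaranteed-working iCloud login."""
    creds = base64.b64encode(f"{username}:{password}".encode()).decode()
    body = (
        '<?xml version="1.0" encoding="utf-8"?>'
        '<propfind xmlns="DAV:">'
        '<prop><current-user-principal/></prop>'
        '</propfind>'
    )
    headers = {
        "Authorization": f"Basic {creds}",
        "Depth": "0",
        "Content-Type": "application/xml; charset=utf-8",
    }
    return "PROPFIND", "https://caldav.icloud.com/", headers, body
```

The response (in a healthy setup) hands back the principal URL containing that unique ID you see in the URIs below, which is why you never have to know it up front.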

** Update: I haven’t been able to get it working in Lightning/Sunbird yet. But it’s most likely a matter of forming the URI correctly. It should be something along the lines of:[unique ID]/principal/
or some variation thereof. I’ll try to work on this more tomorrow.

Update 2: It appears they’re also using CardDAV for contacts (hooray for standards!). The path for that would start[unique ID]/carddavhome (Thanks MacRumors forums!)
As of 10/15 6:30pm EDT I have NOT been able to get this working in Address Book 5. If you want to take a stab at it, I do know that Address Book 6 uses a URI like:

https://[username][unique ID#]/carddavhome/card/[long string].vcf
(The %40 being necessary as you can’t have two @s in there but you need to include an email address as a username.)

Update 3: So they’re not using a SRV record to do it as far as I can tell (but they are using Akamai so there’s at least one layer of abstraction). Next…

Yours in nerdery,


The Principle of Least Privilege – A Failure in MA

// May 18th, 2011 // Comments Off on The Principle of Least Privilege – A Failure in MA // Rants

[cross-posted to my blog at Berkman/Harvard Law Weblogs]

Disclaimer: I am not a lawyer, nor do my opinions represent that of Harvard Physics, Harvard Law or Harvard University. What I am is a computing professional and technologist. A sometimes outraged one. As a result, some of what follows may be a bit snide. I can’t apologize just yet for that. Past the outrage, I’m hoping that something good will come from this incident… although I rather doubt it.

The Incident:
On April 20th, 2011 around 1,500 computers in the Massachusetts labor department’s Departments of Unemployment Assistance (DUA) and Career Services (DCS) were found to be infected with an [allegedly] new variant of a well-known Windows worm (not a virus as has been reported) called W32.Qakbot. From some prior date — they say April 19th, but I don’t find the idea that they know when the initial infection occurred convincing given other facts — until around May 13th (or May 16th, according to another report), information entered or accessed on these machines may have been intercepted by the worm for transmission to an unknown recipient.

The Response:
The Executive Office of Labor and Workforce Development reported this incident on May 17th. That’s 28 days until they notified the public or state officials. Call it four weeks, call it nearly a month, but either way it’s too long and clearly at odds with state law which requires that any such break-in be reported to the Attorney General’s office “as soon as practicable and without unreasonable delay”. There is absolutely no reason this could not have been reported sooner… except, perhaps, incompetence and/or fear. In their official statement it’s claimed that “all possible actions have been taken to minimize the impact to the Commonwealth’s constituents”, but this is clearly in error as “all possible actions” would have included notifying the AG immediately.

And I’m afraid I have to take the Boston Globe to task too. In its report on the incident it said:

“The potential impact of the breach is dwarfed by other recent data thefts. In April, Sony Corp. suffered an attack on several of its networks used by consumers for video gaming, music, and movie downloads. In the same month, Texas e-mail marketing firm Epsilon Data Management LLC reported that hackers had raided its network and stolen the e-mail addresses of millions of US consumers.”

If anything, it’s the other way around. Those other episodes presented a low risk that actual sensitive data was released. The Sony breach, while involving more people, may have included names, email addresses and probably mailing addresses, but these sorts of scraps are something that criminals can often already buy or collect on their own from search engines. The Epsilon breach netted mostly email addresses. In all likelihood, that just means more phishing attempts; something people are already inundated with unless their email provider is one of the better spam preventers.

But the labor department incident most likely included the transfer of critically sensitive information such as Social Security numbers, financial information, EINs, and work or personal history information. So let me be very clear in exactly what I’m stating. This incursion is more serious than the Sony or Epsilon breaches. It may affect tens or hundreds of thousands of MA residents and potentially thousands of MA businesses and, unlike the Sony breach, which may help identity thieves zero in on a target, the information gleaned from DUA/DCS might make it a trivial matter for thieves to hijack a person’s identity.

The initial response to the media from the labor department was a shrugging ‘Well you know… viruses, right?’ and a clearly implied wish that everyone will just move on and not make a big deal of it. As though virus/worm outbreaks are just part and parcel of having a computer. And some, it seems, including some of the media reporting the issue, are buying this wrong-headed idea. Why? Because… well, because lots of people have PCs and they get viruses all the time, right? Right. And Wrong. And part of the problem. The home computer user’s experience cannot and should not be projected onto the ‘enterprise’ computing environment. The fact that the average PC user and the average business user are both using a boat with Windows written on the side does not mean that the water they sail on is the same.

That sort of thinking is what’s got us where we are. The proliferation of malware (viruses, worms, trojans, etc.) in the world is not a foregone conclusion. It’s not an endemic side-effect of owning a computer. It’s something that has grown and been fostered by a poor understanding of ‘security’, a leaning towards this sort of passive concession that it’s Computer Magic and beyond our ken and… frankly… laziness. That’s been followed up by an industry that’s happy to do the least they can get away with to band-aid the situation and entities who put their head in the sand and think slapping on an anti-virus client is good enough. And the cycle repeats. The only winners are the thieves. They win because a large portion of the United States computing population can’t be bothered to do better.

Let’s talk about particulars. One concept most PC users do not follow, but every business PC environment that calls itself security-conscious should, is the ‘Principle of Least Privilege’, aka the least-privileged user account (LUA). Given the notoriously malware-prone existence that Windows has lived, a corporate or government support entity that does not subscribe to this principle is just asking for it. The idea is very simple: the end-user should ordinarily be logged in with an account which has the least amount of administrative privilege possible while still allowing them to do their work. In other words, require passwords and don’t log in with an administrator account. But… walk into any coffee shop in America and it’s a safe bet that 80%-90% of the people there are logged in with full administrative rights.

Why is this so important? Why am I bringing it up here? And why do I assume the computers in question didn’t rely on this principle already? Simple: This one action, implementing this one policy, would have stopped the spread of this worm in the DUA/DCS computers. W32.Qakbot cannot extend its infection without the user having certain administrative privileges. And, in my opinion, this principle should not only be encouraged… it should be mandated, especially for computers that come into contact with sensitive information. I know mine are. And how many ‘inevitable’ virus/worm infestations have we dealt with in my tenure as head of this group? Zero.

I’m not saying this to imply that my network is beyond the reach of malicious computer thieves and black hat hackers. No network can ever be 100% secure. But there are certain principles and methodologies, well-known and well-documented in the annals of computer security, that, if followed, reduce your susceptibility by leaps and bounds. But, sadly, many would rather cross their fingers, stick their heads in the sand and hope they get lucky. Well… the law of averages (another name for ‘luck’) is not on their side. Yes, your users will complain that they can’t install software without your help, but they won’t be complaining about a proliferation of viruses and malware. Because, and this is the crux of the whole principle of least privilege, if they can’t install software, malware can’t install itself. The malware only has as much privilege to modify the system as the user does (barring flaws in the operating system – that’s a wholly separate issue that we’ll not get into here). And you, the administrator, control that level of privilege.
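That crux can be sketched in a few lines. This is a toy model, not a real OS API; the account names and the privilege sets are my own illustrative assumptions:

```python
# Toy model of the principle of least privilege (illustrative only;
# account names and privilege sets are assumptions, not a real OS API).
PRIVILEGES = {
    "standard_user": {"read", "write_home"},                  # day-to-day LUA account
    "administrator": {"read", "write_home", "write_system"},  # full admin rights
}

def install_software(account):
    """Installing system-wide software, or a worm copying itself into
    system directories, requires the write_system privilege."""
    if "write_system" not in PRIVILEGES[account]:
        raise PermissionError(account + " cannot modify the system")
    return "installed"

# Malware running under a standard user inherits only that user's
# privileges, so the same check that blocks the user blocks the worm.
```

Call `install_software("standard_user")` and you get a `PermissionError`; that refusal, however annoying to the user, is exactly the barrier that stops a worm from spreading itself.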

Simple. Effective. And… ignored by the average IT outfit as being too ‘burdensome’ on the end-user. Sure, a firewall is the first line of defense when designing your network. But an anti-virus client is not the second line of defense; it’s the last. We’re not even concerned yet with what operating system is in the line of fire, much less what software it’s running. The second line of defense in this case is your policies, and the question is whether it’s more burdensome to inconvenience the user a little bit… or to risk having the whole thing come down on your head, as DUA and DCS are now experiencing.

  • If you approach your security policies as merely ‘keeping people out’, you have already failed.
  • If you approach them from the standpoint of ‘let’s assume they’re already in’, you have a chance at success.

So when CNET reports that “The agency is notifying people who may have been affected and is working with the Massachusetts attorney general’s office to investigate the breach”, I sincerely hope that part of the investigation will include looking into what made this possible from the inside, not just from the outside. Because there’s zero chance they’ll stop the thievery of this information. It’s already in the wild, and catching the perpetrators is now a secondary concern given that there’s no taking back the damage. But as a MA state resident, right now I care very much about what my state government’s computing security policies are and why they’re not using every proven method available to them to safeguard our information. We have new and very specific laws in MA about how sensitive information can be transmitted, but how it’s stored and maintained by the state is equally important.

And, as such, I feel that the Executive Office of Labor and Workforce Development has some explaining to do.

State House News Service report: Massachusetts officials disclose data breach in unemployment system
Official response: Executive Office of Labor and Workforce Development Reports…

Some remarks on how Fukushima is not Chernobyl

// March 14th, 2011 // Comments Off on Some remarks on how Fukushima is not Chernobyl // Rambling

Disclaimer: I am not a nuclear scientist nor do I work in the nuclear field. I am, however, a staunch proponent of science and work with physicists and other scientists, but I have no qualifications beyond a fairly serviceable brain and a willingness to study, learn and listen.

First, this is a great post from William Tucker at WSJ. Two thumbs up. Also, if you’d like to follow the basic facts minus the hyperbole, please visit the NEI page on the current situation at Fukushima Daiichi.

Second, I thought I’d share some comments from one of the professors from my department. Prof. Richard Wilson (bio) has been active in humanitarian aid, outreach and education for many groups and has been especially vocal about arsenic poisoning and cancers from man-made problems (including radiation exposure). That said, he’s also a realist about nuclear energy, having worked in and around the field for decades. (FYI – He will be on NECN tonight at 6 and 9.) In an email to all re: the Fukushima situation, he said:

The reactors all shut down at the earthquake automatically unlike Chernobyl. The problem then is to cool the core since the circulating water through the steam turbine has stopped.

Just after shutdown the power level is 8% of full power (plus an extra 4% in neutrinos). This drops in a well known way (the Way-Wigner law of 1948), roughly exponentially. After 10 hours it is about 1%, and on to 0.1% after a year.
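Those decay-heat figures can be sanity-checked against the standard Way-Wigner approximation. This is a back-of-the-envelope sketch; the one-year operating history is my assumption, and the formula is only good to within a factor of a few at long times:

```python
def decay_heat_fraction(t_after_shutdown_s, t_operating_s):
    """Way-Wigner approximation: fission-product decay heat as a fraction
    of full thermal power. Valid roughly for t > 10 seconds."""
    return 0.066 * (t_after_shutdown_s ** -0.2
                    - (t_after_shutdown_s + t_operating_s) ** -0.2)

YEAR_S = 3.156e7  # seconds in a year; assumed operating time before shutdown

ten_hours = decay_heat_fraction(10 * 3600, YEAR_S)  # on the order of 1%
one_year = decay_heat_fraction(YEAR_S, YEAR_S)      # a small fraction of a percent
```

The ten-hour value comes out around 0.6-0.8% of full power, consistent with Wilson’s “about 1%”, and it keeps falling from there.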

Note there the first thing we need to set straight: The plants shut down as expected and nothing was breached. It’s a cooling issue, full stop. Chernobyl had no containment and this more modern (but still not ‘modern’) plant does. If this were “a Chernobyl”, hundreds would be dying right now.

In Japan, there was no power from the grid to operate the pumps. Emergency DC power worked for a short while. Emergency diesels started up as planned but failed after 8 hours in one plant, and longer in another, due to flooding. In the absence of cooling, the water cooling the core began to evaporate. At the power plant there is now basically no electricity. It took perhaps an hour or two for the top of the core to be uncovered and heat up (this time must be known but I do not know it). Then the core starts heating and the chemical reaction begins between the zirconium cladding of the fuel rods and the water. This dissociates the water and hydrogen is released.

You can see where this is going.

At TMI [note: TMI = Three Mile Island] cooling was interrupted by stupid manual action almost at once, and 2 hours later hydrogen was produced which caused an explosion INSIDE THE CONTAINMENT at about noon (exact time in my files), 8 hours after the initial accident. In Japan the hydrogen and other gases were vented and the explosions were OUTSIDE (where they did no important damage) and much later.

Cooling of the reactor is now maintained by sea water flooding the containment (maybe also the reactor vessel, but I do not know this). This can cool the reactor core by conduction through the pressure vessel. But the water is not circulating, and steam is produced which is being vented. It is claimed that there is filtering for radioactivity, but I do not know this for certain.

It seems to me that this can be continued indefinitely at least until electricity is available at the site.

No ‘Chernobyl’. Not even close. Not even in the same ballpark, much less the same type of incident or type of reactor.

Note that there are still TWO (2) barriers to release of radioactivity even if the core has completely melted, which I believe is unlikely:
(1) the pressure vessel, which seems intact;
(2) the containment.

I believe that both of these will continue to hold and the only problem will be in the controlled release. Noble gases will be released, but these do not interact much in the body (you breathe them in and then breathe them out). Of these, krypton is the longest lived.

Cesium is normally solid and, even if the containment fails, the cesium may not evaporate. (Indeed at Chernobyl, which was very hot, very little of the strontium evaporated and it did not contribute appreciably to the radiation dose.)

The highest recorded dose so far in the region of the plant is 150 mrem/hr (1.5 mSv per hour). The natural background is about 300 mrem/yr. The acceptable one-time dose for accidents is 80 rem for an astronaut and 20-40 rem for a clean-up worker.
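For anyone tripped up by the mixed units in that passage, rem and sievert differ only by a factor of 100 (1 Sv = 100 rem), so the quoted figures convert directly:

```python
def mrem_to_msv(mrem):
    """Convert millirem to millisieverts: 1 Sv = 100 rem, so 100 mrem = 1 mSv."""
    return mrem / 100.0

peak_dose_rate = mrem_to_msv(150)  # 1.5 mSv/hr, matching the figure quoted above
background = mrem_to_msv(300)      # 3.0 mSv/yr of natural background
```

So the peak measured dose rate near the plant is roughly half a year’s natural background per hour: worth monitoring, nowhere near acute-sickness territory.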

And, finally, Dick’s prediction:

No one in the public will get acute radiation sickness, and probably no one in the reactor staff either.
No one will have problems from iodine ingestion.
There will be minimal cesium releases, and not one fatal cancer will be CALCULATED (using the standard pessimistic formula) from the doses to the public.

This is to be compared to 1,000-10,000 direct, measurable and definite deaths from other earthquake problems.

So, to all the Chicken Littles out there saying we have to stop and ‘review’ our nuclear programs (“ohmygawd ohmygawd!”) because of an incident that is so far outside the norm as to be unique, let me remind you of two things:

1) We’re CONSTANTLY reviewing our designs and programs. That’s how science works. And safety is THE primary review concern in nuclear energy production. Do you seriously think they’ve overlooked earthquakes??? That’s why we’ve gone on to design new and safer plants… so they’ll be as safe as possible. And they can be as safe as you’re willing to allow. Unless you also want to be stingy and would rather continue to dump pollutants into the atmosphere until we run out of oil.

2) “As possible” – Nothing can be made 100% safe. Everything is a weighing of risks against needs. That said, I didn’t see anyone calling for the halt and review of petroleum energy when BP polluted miles of ocean and coastal regions. No one’s called for a halt to automobile and coal pollution to review the deaths it causes each year.

I’ve heard several people toss out the “but what about a worst case scenario???” to which I want to shout “THIS IS THE WORST CASE SCENARIO!” And so far, it’s being contained despite the age of the technology and the crumbling of the infrastructure around the plants. This quake was unlike anything before it, and yet, despite the events leading to it being ‘worst case’, the crisis at the plants is not worst case.

What does your call for a ‘moratorium’ hope to do? Address the obvious? Ask the same questions that are asked every day in meetings and design reviews which seek to create safe, clean energy? No, it’s histrionics seasoned with a little bit of good ol’ political grandstanding. The incident at Fukushima is certainly worrisome, but it’s not an indictment of nuclear energy.

And it’s certainly no Chernobyl. Humans & Science 1, Histrionics & Emotion 0.