Blog. And sometimes just whinging.
Walking Eye, Hank! They’re all the same.
// November 27th, 2011 // Comments Off on Walking Eye, Hank! They’re all the same. // tech
I’ve been working on a robot for a while. Well, a specific robot. I’ve tinkered with some others prior to and along the way. But for this particular bot, I started fiddling with servos and controllers for an arm last winter and since then I’ve bought a bunch of micro-controllers and itsy bitsy computers to fiddle with too. The results of all that fiddling have been sort of percolating in my head and have recently, in long bursts of work, been spat out into this, the Tiny Walking Eye. I never intended to do a pre-design, per se, and I’ve let all the ideas sort of clump together so that I knew roughly what I wanted to build, just not exactly how I’d build it. I built an eye. Then an arm. Then… I built all that you see below in a couple of long, late nights. Given that I ‘made it up as I went’, I’m fairly pleased with the aesthetics of it so far as well. Nobody wants an ugly robot.
OK, it’s not actually tiny, it’s about 22 inches high at the top of the video camera. And it doesn’t walk, it has 4 drive wheels in a differential (‘tank’) steering configuration. And the eye is a camera. But my friend Christopher (who I also do a podcast with) and I were making Venture Brothers jokes and I got fixated on “Giant Walking Eye” and so TWE was christened.
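For the curious, differential (‘tank’) steering boils down to mixing one drive command and one turn command into left-side and right-side wheel speeds. A minimal sketch of that mixing (my own illustration, not TWE’s actual controller code):

```python
def tank_mix(throttle, turn):
    """Mix a throttle (-1..1) and turn (-1..1) command into clamped
    left/right wheel speeds for differential ('tank') steering."""
    left = max(-1.0, min(1.0, throttle + turn))
    right = max(-1.0, min(1.0, throttle - turn))
    return left, right

# Full throttle, no turn: both sides run equally.
print(tank_mix(1.0, 0.0))   # (1.0, 1.0)
# No throttle, some turn: the sides run opposite and the bot spins in place.
print(tank_mix(0.0, 0.5))   # (0.5, -0.5)
```

Turning while driving just biases one side faster than the other, which is why the Wild Thumper needs no steering linkage at all.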
The chassis is a Dagu 4WD Wild Thumper bot chassis (ordered from Pololu) which I’ve extended a few inches. In hindsight, the 6WD chassis would have been better. Maybe for TWE2. The shell is foam PVC sheeting which is easy to cut for someone who lives in an apartment and doesn’t want to annoy her neighbors (more than she does already). Also, it’s very light and I need to keep everything that sits on the chassis under about 10 pounds. The chassis undercarriage has a power distro box with motor controllers, an emergency cut-off switch and two 7.2v batteries in parallel for the motors.
There’s a Mars Rover-esque platform that sits on a revolving turret on the chassis. On top of that is the brain box which also houses the front-facing arm. The arm is also made from .157 PVC and is powered by tandem servos (one reversed so that they lift in concert) for shoulder and elbow. The two large-scale shoulder servos are Hi-Tecs and the two in the elbow are some crazy Chinese servos I found on Amazon which have huge amounts of torque and metal gearing. The elbow will do most of the work by itself and the shoulder is only needed when the arm needs to be extended. The wrist ends with a gripper which I bought from Parallax or RobotShop or somewhere. I’ll be using Phidgets or Pololu controllers for the servos (depending on which ‘network’ I end up using – more below).
The white PVC and angled cuts give it a sort of 70s sensibility and I’m OK with that. Plus, you should note that there’s nothing on the side yet and there will be. There’s only one ultrasound range finder right now, but I’ll be putting 5 more on (one on each angled corner and one in the rear) as well as some other goodies on the side. And in the rear will be a little boot to put the secondary batteries in (two 6v totaling ~9Ah for the processors and controllers).
Up top there’s a camera which pans and tilts and immediately behind that is a 7″ LCD display. The display will be hooked into the core micro-controller (probably a Parallax Propeller board) which may or may not also be hooked to an ultra-small PC (a Gumstix Fire or a Genesi Efika – both of which I’m playing with). This all depends on how I want the robot to be controlled. Without the PC, I’d be making it completely autonomous (maybe an XBee for remote control or logging). With the PC, I’d be able to store and execute more complex code and also tie back via Wi-Fi to another computer where I could assume manual control if desired. I haven’t decided. Maybe I’ll try to make it do both. It all comes down to software and batteries.
Additionally, I need to decide how this will all be connected. Depending on which controllers and computers end up inside, it will either be primarily USB, Ethernet or a mixture of I²C and USB. I’ve mocked up both and there are benefits to each. We’ll see. USB is winning at the moment given that all the controllers already inside have USB ports and everything else could be wired to the micro-controller (which also has USB).
Then, I need to figure out how it recharges itself. That’ll involve building a charging circuit for the various battery systems and a station it can find on its own (probably using RFID triangulation).
ANYway… that’s the state of TWE. In case you were wondering. Which you totally were. Hope you enjoyed. Cuz… everybody needs a robot. For… ‘reconnaissance applications’.
UPDATE FROM THE FUTURE!
Sorry I didn’t update this more as TWE progressed. I moved to a real shop at the Artisan’s Asylum and worked on TWE in fits and bursts. I put in control systems for the drivetrain, arm and sensors. Played around with ROS and OpenQbo and whatnot. All in all, TWE was a fair success (there were things I could have done much better, admittedly) and, admit it, he’s adorable.
But then, alas, other projects got in the way and TWE ended up on a shelf. I eventually pulled out and loaned the drivetrain to Brandon from Rascal Micro for a while to do some demos on and I pulled a few things out here and there for other projects.
BUT! Don’t be sad. This week I took the drivetrain out and reprogrammed the controllers. I removed the arm from what’s left of TWE’s chassis (that arm turned out good, dude. Seriously.), the dual cameras, and am tinkering with a different idea now. It’s already got a name of course. Because I have to name everything. SHANE — I was running the drive train up and down the aisle at the Asylum using an RC controller and, joking to a fellow inmate, I called ‘Come back, Shane!’ And that’s how names are gotten. ;)
Anywho. TWE has left us, long live TWE. But something new to come. :)
What Am I Ticked Off About re: Mozilla/Firefox?
// November 17th, 2011 // 2 Comments » // Rants, tech
EDIT: I’m very grateful to Mozilla for listening and eventually creating the ESR track for both Firefox and Thunderbird. This ( http://www.mozilla.org/en-US/firefox/organizations/ ) effectively fixes all the below for us.
EDIT 12/2/11: heh… Looks like I’m not alone in my thoughts. Comments at Slashdot on Firefox losing market share [image].
Mozilla is fighting an invisible battle against Google Chrome. They’ve implemented a ‘me too’ rapid release cycle for Firefox (and therefore also Thunderbird since they have [again artificially] tied their cycles together) in answer to Google’s rapid release cycle.
And the poop started hitting the fan. Not only was the public confused (“OMG! My browser’s really old! I only have 3.6 and they’re already up to 6! Was I asleep for a year?”) but enterprise IT folks were not amused. We can’t afford to have a browser we just deployed be declared unsupported mere weeks later. Similar remarks here: http://mike.kaply.com/2011/06/23/understanding-the-corporate-impact/
Yes, there is a working group that was put together after Mozilla finally admitted that enterprise IT had a valid point ( http://www.readwriteweb.com/hack/2011/08/mozilla-chair-acknowledges-ent.php )… in August 2011 after the release of version 6… two more major releases have come out since then. But right now there’s just an ESR proposal and… that’s where we stand. In the meantime, time continues to go forward at the same pace and we’re still dealing with actually using the browser. We essentially had ESR, then Mozilla took it away to go tilt at a windmill called Chrome. Now we wait while people talk about ESR… or we don’t wait and we move on.
We want to love you, Firefox! Why won’t you let us love you!??
The browser we’d fought for, the browser that finally took away share from IE, the browser that worked across platforms and became popular enough for sites to start to say “OK, we support Firefox too.” That browser’s maker has seemingly turned into a parody of Microsoft trying to keep up with [Apple/Google/etc. and yes, even Mozilla], clumsily announcing after the fact “Oh, yeah, we’re gonna do that too!” Now I have users who used to complain, at most, about a website now complaining about the browser itself.
So now, no more stable release followed by a cycle of improvements and bug fixes (all the while being supported because the ordinal number up front hasn’t changed and won’t change until the next release goes stable and comes out of beta). Now it’s release, release, release and pray to bob that the bug fixed in 5 doesn’t show up again in the ‘all new super hot off the press’ 8.
And, most importantly, this all loses sight of how the browser wars ended. They ended with Firefox the moral and spiritual victor on one solid principle: Build a better browser and people will use it. Goliath IE was slain (or at least severely maimed and forced to also get better) by one simple principle: Build a better browser and people will use it. Did I mention “Build a better browser and people will use it”? Not “OMGZ googlez has bilt a browzer and they’s gonna take all our search eyeballs moneys! Run around in circles!!!”
Now Firefox is so effing scared that they’ll lose that sweet Google search eyeballs cash that they’re all but making it a self-fulfilling prophecy in their panic. ( http://www.conceivablytech.com/9419/business/browser-market-share-forecast-update-firefox-losses-accelerate ) Why? Because Google planted that idea in their head when they released Chrome and now Mozilla’s management can’t see past it. It’s like a bug in their brain that’s making them crazy. (“This is Ceti Alpha V!”) They are so fixated on the forest they don’t see the trees catching fire. But the truth is that Google will keep paying out that cash as long as Firefox brings in eyeballs. That is, unless Mozilla gets so panicked they start acting like headless chickens and _manage to drive all their customers away_!
Which is exactly what I think might be happening. Hell, I’M using Chrome now because I just can’t take it any more (and Safari is in the crapper too as far as I’m concerned – so I don’t have much choice… in a world that used to be all about choice).
Now, my team is forced to sit down and talk about “What browser do we support officially if/when Firefox doesn’t get back on track. Also, we’re screwed email client-wise if Thunderbird ends up under the bus for no good reason.” My server guy… my poor staunch advocate for open source and non-big brothery software is forced to admit that we might have to consider Chrome! He wants to love you, Firefox! Hell, he does love you. But his love is wavering. So what exactly is wrong? Sheesh, where to begin. And, honestly, I’ll forget something. It’s all become a blurry laundry list of complaints from minor annoyances to show-stopping bugs (Stack space errors?? Really?? In 2011?). But, quickly and anecdotally, go google this:
http://www.google.com/search?q=firefox+switch+to+chrome
Those people? They’re not switching to Chrome because Chrome is sexy or amazing… largely you’ll see them saying that they are leaving Firefox because of Firefox’s problems or shortcomings, not Chrome’s features. OK, on to my gripes as an enterprise (education, actually, but we work the same and expect the same) IT shop.
* Instability. We’ve gone from a stable Firefox (sure, it had its quirks, but stable enough for us to say “we support Firefox” and be able to stand by it) to having to say “well, if you’re having problems in Firefox, you may have to use Safari/IE for that”. And then bracing for the next release 6 weeks later. (In all honesty, we’re just leaving most people on 3.6.x)
* Page rendering and slowness. This has forced us to downgrade some users who just can’t deal with it to 3.6.x. And we’re clearly not alone: http://www.zdnet.com/blog/hardware/firefox-36-is-mozillas-windows-xp/16098?tag=rbxccnbzd1
And, tellingly, you’ll still find a link to 3.6.24 on Mozilla’s download site. Even they tacitly admit there’s still a reason for it to be there:
http://www.mozilla.org/en-US/firefox/all.html
* Let’s talk about slowness. How can it be that Chrome got faster while Firefox got slower? ZDNet sure thinks that’s what happened. Compare these two Kraken scores: [images]
You’re killing yourself, Mozilla. No excuses, no waffling. You. Are. Killing. Yourself.
* New weirdness depending on if you’re on 6 or 7 or 8. Profiles being trashed, bookmarks reverting or disappearing… What works in 7 might not work in 8. What was fixed in 7 from 6 seems to once again affect 8. And boy is it RAM hungry. But it was i/o hungry before, so that’s probably a step forward for users with networked home directories… Submit crash report, submit crash report, submit crash report.
* The artificial rapid release cycle creating browser instability is also unnecessarily affecting Thunderbird. For us, Thunderbird 8 is unusable. It _simply does not work for some users_. Add an IMAP account with lots of folders and mail and it crashes at startup. Get someone with less mail and it’s fine (but Lightning may or may not work). Submit crash report, submit crash report, submit crash report.
* The rapid release cycle also tends to break plugins/add-ons, often for no other reason than the fact that this version, which isn’t much different, starts with a different number. We even saw Thunderbird run into this on release day when we rushed to test it. In my case, instead of bringing Lightning with it, it disabled the already-installed Lightning add-on and then refused to upgrade (Lightning will be upgraded on next restart -> restart -> Lightning will be upgraded on next restart -> removed Lightning manually -> install Lightning -> Lightning is not compatible with this version (WTF?) -> clear everything out -> install, go to add-ons, aha! Lightning link in featured add-ons -> install Lightning -> Lightning will be installed on next restart -> restart -> Lightning will be upgraded on next restart… give up.) That’s… crazy. This is Mozilla we’re talking about…
Dammit… we were pinning our hopes on integrating Lightning into our environment to stem the tide of requests for Outlook from those who just wanted calendaring of some sort. Now we have a 1.0 release of Lightning for a version of Thunderbird we can’t even deploy. ARGH! Because of Firefox chasing Chrome around like a big dumb puppy chasing a car. (“It must want to eat my food! GRR! Chase!”)
I think Mozilla has lost their minds. Please. Please. Go find your minds and put them back in before you lose all that you’ve worked and fought so hard for (and we’ve supported so strongly) because you got a little scared by some actual competition. This coming from someone who wants you to succeed. Who’s begging you to succeed. I’m your fan. Your cheerleader. And now I’m about to break up with you because… you won’t let me love you!
Additional reading from way back at version 5 (oh, wait, that wasn’t that long ago…)
http://www.conceivablytech.com/8102/business/should-mozilla-ditch-the-rapid-release-cycle-again
Getting to your iCloud calendar from iCal 4 (OSX10.6) or a CalDAV client
// October 13th, 2011 // 67 Comments » // Rambling, tech
UPDATED 10/15/11 with new instructions!
I work in an environment where all the machines are tied to a single sign-on system and all the users, be they Mac, PC or Linux, have their home directories mounted from a server at login. Right now, OSX Lion won’t work in that environment, so all our Macs are running 10.6 or 10.5.8.
But what if I want to use my iCloud calendar from work via iCal (or another CalDAV capable client**)? It’s pretty damned easy, actually, I’m happy to say.
Maybe this is published somewhere, maybe not. But I figure a couple of my peeps might benefit from me posting this up. So here goes.
- Get your calendar set up and up to date in iCloud first. Don’t monkey with doing that after the fact.
- It just got easier. Skip to step 10 and ignore the steps below marked [SKIP]
- [SKIP] Open icloud.com in a web browser and go to your calendars. Click on the circular ‘wireless’ icon to the right of the name of the calendar you want to use. The calendar you want to use must be shared.
- [SKIP] Note the name of the server right after webcal:// (example: p02-www.icloud.com)
- Open iCal 3. (I’ll be referring to iCal from here on; I can’t say for sure how other CalDAV clients will respond.)
- In iCal, go to Preferences -> Accounts and click the add account button (+)
- Select CalDAV as the account type.
- Enter your iCloud username (for instance, steve@mac.com) and password
- [SKIP] For server address you need to slightly modify that server name you jotted down in step 3
[SKIP] If the server was p02-www.icloud.com, you would replace www with caldav and enter p02-caldav.icloud.com
- For the server address, simply enter “caldav.icloud.com” (I don’t know when this started working, but it does.)
- Click create. If presented with a choice of two possible servers, choose the one that says caldav.icloud.com, not cal.me.com — IF YOU GET AN ACCESS NOT PERMITTED ERROR then you’ll need to use the greyed out instructions instead.
- Live large. You now have your iCloud calendar and reminders in iCal. You might want to change how it refreshes, if you’re like me and want control over that. Push may not work as well in iCal 3. Otherwise, it’s a full CalDAV implementation; add, delete, modify, etc.
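If you ever do need the old per-cluster hostname, the mapping from the webcal:// share URL is purely mechanical: swap the www label for caldav. A quick sketch (the p02 cluster and the path here are just example values):

```python
from urllib.parse import urlparse

def caldav_host(webcal_url):
    """Derive the iCloud CalDAV server name from a calendar's
    webcal:// share URL by swapping the 'www' label for 'caldav'."""
    host = urlparse(webcal_url).netloc        # e.g. p02-www.icloud.com
    return host.replace("-www.", "-caldav.")

print(caldav_host("webcal://p02-www.icloud.com/published/2/EXAMPLE"))
# p02-caldav.icloud.com
```

That said, the plain caldav.icloud.com address above makes this unnecessary for iCal itself; the derivation is mostly useful for clients that can’t follow the redirect.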
** Update: I haven’t been able to get it working in Lightning/Sunbird yet. But it’s most likely a matter of forming the URI correctly. It should be something along the lines of:
https://pXX-caldav.icloud.com:443/[unique ID]/principal/
or some variation thereof. I’ll try to work on this more tomorrow.
Update 2: It appears they’re also using CardDAV for contacts (hooray for standards!). The path for that would start https://pXX-contacts.icloud.com/[unique ID]/carddavhome (Thanks MacRumors forums!)
As of 10/15 6:30pm EDT I have NOT been able to get this working in Address Book 5. If you want to take a stab at it, I do know that Address Book 6 uses a URI like:
https://[username]%40mac.com@pXX-contacts.icloud.com:443/[unique ID#]/carddavhome/card/[long string].vcf
(The %40 being necessary as you can’t have two @ in there but need to include an email address as a username.)
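Building that URI from an email-style username is just a matter of percent-encoding the @. A sketch, with a hypothetical host and placeholder unique ID:

```python
from urllib.parse import quote

def carddav_url(username, host, unique_id):
    """Embed an email-style username in a CardDAV URL. Its @ must be
    percent-encoded (%40) so it isn't confused with the URL's own
    userinfo separator @."""
    user = quote(username, safe="")   # steve@mac.com -> steve%40mac.com
    return f"https://{user}@{host}:443/{unique_id}/carddavhome/card/"

print(carddav_url("steve@mac.com", "p02-contacts.icloud.com", "123456"))
# https://steve%40mac.com@p02-contacts.icloud.com:443/123456/carddavhome/card/
```

Most CardDAV clients will also accept the username in a separate field, in which case no encoding is needed at all.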
Update 3: So they’re not using a SRV record to do it as far as I can tell (but they are using Akamai so there’s at least one layer of abstraction). Next…
Yours in nerdery,
Maggie
The Principle of Least Privilege – A Failure in MA
// May 18th, 2011 // Comments Off on The Principle of Least Privilege – A Failure in MA // Rants
[cross-posted to my blog at Berkman/Harvard Law Weblogs]
Disclaimer: I am not a lawyer, nor do my opinions represent that of Harvard Physics, Harvard Law or Harvard University. What I am is a computing professional and technologist. A sometimes outraged one. As a result, some of what follows may be a bit snide. I can’t apologize just yet for that. Past the outrage, I’m hoping that something good will come from this incident… although I rather doubt it.
The Incident:
On April 20th, 2011 around 1,500 computers in the Massachusetts labor department’s Departments of Unemployment Assistance (DUA) and Career Services (DCS) were found to be infected with a [allegedly] new variant of a well-known Windows worm (not a virus as has been reported) called W32.Qakbot. From some prior date — they say April 19th, but I don’t find the idea that they know when the initial infection occurred convincing given other facts — until around May 13th (or May 16th, according to another report), information entered or accessed on these machines may have been intercepted by the worm for transmission to an unknown recipient.
The Response:
The Executive Office of Labor and Workforce Development reported this incident on May 17th. That’s 28 days until they notified the public or state officials. Call it four weeks, call it nearly a month, but either way it’s too long and clearly at odds with state law which requires that any such break-in be reported to the Attorney General’s office “as soon as practicable and without unreasonable delay”. There is absolutely no reason this could not have been reported sooner… except, perhaps, incompetence and/or fear. In their official statement it’s claimed that “all possible actions have been taken to minimize the impact to the Commonwealth’s constituents”, but this is clearly in error as “all possible actions” would have included notifying the AG immediately.
And I’m afraid I have to take the Boston Globe to task too. In its report on the incident it said:
“The potential impact of the breach is dwarfed by other recent data thefts. In April, Sony Corp. suffered an attack on several of its networks used by consumers for video gaming, music, and movie downloads. In the same month, Texas e-mail marketing firm Epsilon Data Management LLC reported that hackers had raided its network and stolen the e-mail addresses of millions of US consumers.”
If anything, it’s the other way around. Those other episodes presented a low risk that actual sensitive data was released. The Sony breach, while involving more people, may have included names, email addresses and probably mailing addresses, but these sorts of scraps are something that criminals can often already buy or collect on their own from search engines. The Epsilon breach netted mostly email addresses. In all likelihood, that just means more phishing attempts, something people are already inundated with unless their email provider is one of the better spam preventers.
But the labor department incident most likely included the transfer of critically sensitive information such as Social Security numbers, financial information, EINs, and work or personal history information. So let me be very clear in exactly what I’m stating. This incursion is more serious than the Sony or Epsilon breaches. It may affect tens or hundreds of thousands of MA residents and potentially thousands of MA businesses and, unlike the Sony breach, which may help identity thieves zero in on a target, the information gleaned from DUA/DCS might make it a trivial matter for thieves to hijack a person’s identity.
The initial response to the media from the labor department was a shrugging ‘Well you know… viruses, right?’ and a clearly implied wish that everyone will just move on and not make a big deal of it. As though virus/worm outbreaks are just part and parcel of having a computer. And some, it seems, including some of the media reporting the issue, are buying this wrong-headed idea. Why? Because… well, because lots of people have PCs and they get viruses all the time, right? Right. And Wrong. And part of the problem. The home computer user’s experience cannot and should not be projected onto the ‘enterprise’ computing environment. The fact that the average PC user and the average business user are both using a boat with Windows written on the side does not mean that the water they sail on is the same.
That sort of thinking is what’s got us where we are. The proliferation of malware (viruses, worms, trojans, etc.) in the world is not a foregone conclusion. It’s not an endemic side-effect of owning a computer. It’s something that has grown and been fostered by a poor understanding of ‘security’, a leaning towards this sort of passive concession that it’s Computer Magic and beyond our ken and… frankly… laziness. That’s been followed up by an industry that’s happy to do the least they can get away with to band-aid the situation and entities who put their head in the sand and think slapping on an anti-virus client is good enough. And the cycle repeats. The only winners are the thieves. They win because a large portion of the United States computing population can’t be bothered to do better.
Let’s talk about particulars. One concept most PC users do not follow but every business PC environment that calls themselves security-conscious should is the ‘Principle of Least Privilege’ aka least-privileged user account (LUA). Given the notoriously malware-prone existence that Windows has lived, a corporate or government support entity who does not subscribe to this principle is just asking for it. The idea is very simple: The end-user should ordinarily be logged in with an account which has the least amount of administrative privilege possible which still allows them to do their work. In other words, require passwords and don’t log in with an administrator account. But… walk into any coffee shop in America and you can wager a safe bet that 80%-90% of the people there are doing just that.
Why is this so important? Why am I bringing it up here? And why do I assume the computers in question didn’t rely on this principle already? Simple: This one action, implementing this one policy, would have stopped the spread of this worm in the DUA/DCS computers. W32.Qakbot cannot extend its infection without the user having certain administrative privileges. And, in my opinion, this principle should not only be encouraged… it should be mandated, especially for computers that come into contact with sensitive information. I know mine are. And how many ‘inevitable’ virus/worm infestations have we dealt with in my tenure as head of this group? Zero.
I’m not saying this to imply that my network is beyond the reach of malicious computer thieves and black hat hackers. No network can ever be 100% secure. But there are certain principles and methodologies well-known and well-documented in the annals of computer security that, if followed, reduce your susceptibility by leaps and bounds. But, sadly, many would rather cross their fingers, stick their heads in the sand and hope they get lucky. Well… the law of averages (another name for ‘luck’) is not on their side. Yes, your users will complain that they can’t install software without your help, but they won’t be complaining about a proliferation of viruses and malware. Because, and this is the crux of the whole principle of least privilege, if they can’t install software, malware can’t install itself. The malware only has as much privilege to modify the system as the user does (barring flaws in the operating system – that’s a wholly separate issue that we’ll not get into here). And you, the administrator, control that level of privilege.
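The point generalizes beyond Windows: a process can only write where its user can write. A trivial illustration of probing that boundary (my own sketch, not tied to any particular product or platform):

```python
import tempfile

def can_write(path):
    """Return True if the current user may create files under `path` --
    the same check any installer (or any piece of malware) implicitly
    performs when it tries to drop files on the system."""
    try:
        # Creating (and immediately discarding) a temp file is a safe probe.
        with tempfile.TemporaryFile(dir=path):
            return True
    except OSError:
        return False

# A least-privileged account should get True only for its own directories,
# never for system locations like Program Files or /usr/bin.
print(can_write(tempfile.gettempdir()))
```

Under least privilege, every system directory returns False for the everyday account, and with it goes the worm’s ability to persist or spread.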
Simple. Effective. And… ignored by the average IT outfit as being too ‘burdensome’ on the end-user. Sure, a firewall is the first line of defense when designing your network. But an anti-virus client is not the second defense, it’s the last line of defense. We’re not even concerned yet with what operating system is in the line of fire, much less what software it’s running. The second line of defense in this case is your policies and whether it’s more burdensome to inconvenience the user a little bit… or risk having the whole thing come down on your head like DUA and DCS are now experiencing.
- If you approach your security policies as merely ‘keeping people out’, you have already failed.
- If you approach them from the standpoint of ‘let’s assume they’re already in’, you have a chance at success.
So when CNET reports that “The agency is notifying people who may have been affected and is working with the Massachusetts attorney general’s office to investigate the breach”, I sincerely hope that part of the investigation will include looking into what made this possible from inside, not just from outside. Because there’s zero chance they’ll stop the thievery of this information. It’s already in the wild and catching the perpetrators is, now, a secondary concern given that there’s no taking back the damage. But as a MA state resident, right now I care very much about what my state government’s computing security policies are and why they’re not using every proven method available to them to safeguard our information. We have new and very specific laws in MA about how sensitive information can be transmitted, but how it’s stored and maintained by the state is equally important.
And, as such, I feel that the Executive Office of Labor and Workforce Development has some explaining to do.
State House News Service report: Massachusetts officials disclose data breach in unemployment system
Official response: Executive Office of Labor and Workforce Development Reports…
Some remarks on how Fukushima is not Chernobyl
// March 14th, 2011 // Comments Off on Some remarks on how Fukushima is not Chernobyl // Rambling
Disclaimer: I am not a nuclear scientist nor do I work in the nuclear field. I am, however, a staunch proponent of science and work with physicists and other scientists, but I have no qualifications beyond a fairly serviceable brain and a willingness to study, learn and listen.
First, this is a great post from William Tucker at WSJ. Two thumbs up. Also, if you’d like to follow the basic facts minus the hyperbole, please visit the NEI page on the current situation at Fukushima Daiichi.
Second, I thought I’d share some comments from one of the professors from my department. Prof. Richard Wilson (bio) has been active in humanitarian aid, outreach and education for many groups and has been especially vocal about arsenic poisoning and cancers from man-made problems (including radiation exposure). That said, he’s also a realist about nuclear energy, having worked in and around the field for decades. (FYI – He will be on NECN tonight at 6 and 9) In an email to all re: the Fukushima situation, he’s said:
The reactors all shut down at the earthquake automatically unlike Chernobyl. The problem then is to cool the core since the circulating water through the steam turbine has stopped.
Just after shutdown the power level is 8% of full power (plus an extra 4% in neutrinos). This drops in a well-known way (the Wigner-Way law of 1949), roughly exponentially. After 10 hours it is about 1% and on to 0.1% after a year.
Note there the first thing we need to set straight: The plants shut down as expected and nothing was breached. It’s a cooling issue, full stop. Chernobyl had no containment and this more modern (but still not ‘modern’) plant does. If this were “a Chernobyl”, hundreds would be dying right now.
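The falloff Prof. Wilson describes is usually written as the Way-Wigner approximation, P/P0 ≈ 0.0622·[t^−0.2 − (t+T)^−0.2], with t the seconds since shutdown and T the seconds of prior operation. A quick sketch of the numbers (assuming a year of operation; the coefficient and exponent are the standard textbook values, not taken from his email):

```python
def decay_heat_fraction(t_shutdown_s, t_operation_s):
    """Way-Wigner approximation: fraction of full thermal power still
    being produced t_shutdown_s seconds after shutdown, for a core
    that ran for t_operation_s seconds before the scram."""
    return 0.0622 * (t_shutdown_s ** -0.2
                     - (t_shutdown_s + t_operation_s) ** -0.2)

YEAR = 365 * 24 * 3600
print(f"{decay_heat_fraction(10 * 3600, YEAR):.3%}")  # under 1% after 10 hours
print(f"{decay_heat_fraction(YEAR, YEAR):.4%}")       # well under 0.1% after a year
```

The exact percentages depend on the operating history, but the shape is the point: the heat drops fast at first and then tapers, which is why the cooling problem is about the first hours and days.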
In Japan, there was no power from the grid to operate the pumps. Emergency DC power worked for a short while. Emergency diesels started up as planned but failed after 8 hours in one plant and longer in another due to flooding. In the absence of cooling, the water cooling the core began to evaporate. At the power plant there is now basically no electricity. It took perhaps an hour or two for the top of the core to be uncovered and heat up (this time must be known but I do not know it). Then the core starts heating and the chemical reaction begins between the zirconium cladding for the fuel rods and water. This dissociates the water and hydrogen is released.
You can see where this is going.
At TMI [note: TMI = Three Mile Island] cooling was interrupted by stupid manual action almost at once and 2 hours later hydrogen was produced which caused an explosion INSIDE THE CONTAINMENT at about noon (exact time in my files) 8 hours after the initial accident. In Japan the hydrogen and other gases were vented and the explosions were OUTSIDE (where they did no important damage) and much later
Cooling of the reactor is now maintained by sea water flooding the containment (maybe also the reactor vessel, but I do not know this). This can cool the reactor core by conduction through the pressure vessel. But the water is not circulating and steam is produced which is being vented. It is claimed that there is filtering for radioactivity but I do not know this for certain.
It seems to me that this can be continued indefinitely at least until electricity is available at the site.
No ‘Chernobyl’. Not even close. Not even in the same ballpark, much less the same type of incident or type of reactor.
Note that there are still TWO (2) barriers to the release of radioactivity even if the core has completely melted, which I believe is unlikely:
(1) the pressure vessel which seems intact
(2) the containment. I believe that both of these will continue to hold and the only problem will be in the controlled release. Noble gases will be released, but these do not interact much in the body (you breathe them in and then breathe them out). Of these, krypton is the longest-lived.
Cesium is normally solid, and even if the containment fails, the cesium may not evaporate. (Indeed, at Chernobyl, which was very hot, very little of the strontium evaporated, and it did not contribute appreciably to the radiation dose.)
The highest recorded dose rate so far in the region of the plant is 150 mrem/hr (1.5 mSv per hour). The natural background is about 300 mrem/yr. The acceptable one-time dose for accidents is 80 rem for an astronaut and 20-40 rem for a clean-up worker.
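Since those figures mix units (mrem per hour versus mrem per year), here's the arithmetic laid out, using only the numbers quoted above plus the standard rem-to-sievert conversion:

```python
# Dose-unit sanity check: 1 rem = 10 mSv, so 1 mrem = 0.01 mSv.
MSV_PER_MREM = 0.01

peak_rate_mrem_per_hr = 150    # highest recorded rate near the plant
background_mrem_per_yr = 300   # rough natural annual background

# Confirm the mSv figure quoted above:
print(peak_rate_mrem_per_hr * MSV_PER_MREM)   # 1.5 (mSv/hr)

# One hour at the peak rate, expressed in years of natural background:
print(peak_rate_mrem_per_hr / background_mrem_per_yr)  # 0.5
```

In other words, one hour at the worst measured spot is roughly half a year of normal background dose. Not nothing, but nowhere near the acute-exposure limits quoted above.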
And, finally, Dick’s prediction:
MY PREDICTION
No one in the public will get acute radiation sickness, and probably no one on the reactor staff either.
No one will have problems from iodine ingestion.
There will be minimal cesium releases, and not one fatal cancer will be CALCULATED (using the standard pessimistic formula) from the doses to the public. This is to be compared to 1,000-10,000 direct, measurable and definite deaths from other earthquake problems.
So, to all the Chicken Littles out there saying we have to stop and 'review' our nuclear programs ("ohmygawd ohmygawd!") because of an incident that is so far outside the norm as to be unique, let me remind you of two things:
1) We’re CONSTANTLY reviewing our designs and programs. That’s how science works. And safety is THE primary review concern in nuclear energy production. Do you seriously think they’ve overlooked earthquakes??? That’s why we’ve gone on to design new and safer plants… so they’ll be as safe as possible. And they can be as safe as you’re willing to allow. Unless you also want to be stingy and would rather continue to dump pollutants into the atmosphere until we run out of oil.
2) “As possible” – Nothing can be made 100% safe. Everything is a weighing of risks against needs. That said, I didn’t see anyone calling for the halt and review of petroleum energy when BP polluted miles of ocean and coastal regions. No one’s called for a halt to automobile and coal pollution to review the deaths it causes each year.
I’ve heard several people toss out the “but what about a worst case scenario???” to which I want to shout “THIS IS THE WORST CASE SCENARIO!” And so far, it’s being contained despite the age of the technology and the crumbling of the infrastructure around the plants. This quake was unlike anything before it, and yet, despite the events leading to it being ‘worst case’, the crisis at the plants is not worst case.
What does your call for a ‘moratorium’ hope to do? Address the obvious? Ask the same questions that are asked every day in meetings and design reviews which seek to create safe, clean energy? No, it’s histrionics seasoned with a little bit of good ol’ political grandstanding. The incident at Fukushima is certainly worrisome, but it’s not an indictment of nuclear energy.
And it’s certainly no Chernobyl. Humans & Science 1, Histrionics & Emotion 0.
It’s the simple things…
// February 4th, 2011 // 1 Comment » // Rambling, tech
I have a scratch volume consisting of several drives in a software RAID setup on my Mac Pro at work. One of the more annoying things is that when I set it about ingesting something, toddle off to do other work while it does so and then come back more or less when it's done, inevitably the drives will have spun down and I'll have to wait for them to spin back up. It's not a long wait, but it's an annoying one. Especially as I can't, say, eject the camera that's connected while they're spinning up.
Boring. GimmeGimmeGimme. NowNowNow.
I don’t want to disable drive sleep because the machine does sit idle sometimes. So what’s a girl to do? Google the pmset options and figure out how to fix this annoyance, that’s what. Turns out, it’s totally configurable (when you have “Put the hard disk(s) to sleep when possible” enabled):
sudo pmset -a disksleep 40
You can take a look at what your basic settings currently are with:
sudo pmset -g
Now my disks won’t spin down for 40 minutes. So if I wander off to lunch, they will spin down. If I’m just taking a bit too long to get back to an ingest/capture/encode, I don’t have to wait for each drive to spin up and the volume (and machine) to become ready. This works for any drives that are directly controlled by the machine (be it a Mac Pro or a laptop). It’s not relevant to any RAID which is hung off a controller card or enclosure with an in-built controller.
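One more wrinkle for laptop folks: `pmset` can take different values per power source, so you could keep the long timeout on mains and a short one on battery. The `-c`/`-b` flags are real, but the values below are just examples, so adjust to taste:

```shell
# Different disk-sleep timeouts per power source (minutes; values are examples):
sudo pmset -c disksleep 40   # -c: when on the charger/mains
sudo pmset -b disksleep 10   # -b: when on battery

# Show only the settings you've customized:
pmset -g custom
```

(`-a`, as used above, sets all power sources at once.)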
Just thought I’d share. Especially since I haven’t posted anything here in ages…
M
Talking up to your customers
// August 19th, 2010 // Comments Off on Talking up to your customers // Rambling, tech
Frankly, I think RED could have handled their current RED-DRIVE/RED-RAM supply (or lack thereof) problem a bit better than it has. Case in point, the latest which amounts to ‘stop whining’.
The latest response -or- Read from the start
(Sorry to single you out, Brent, but the sword is in your hands and you’ve fallen on it willingly.)
I made an off-the-cuff remark that they'd been taking customer service lessons from AT&T: a demeanor often described as "we're not happy 'til you're not happy". But that's actually a bit harsh in that, frankly, I don't think they're even thinking about their 'front-facing' appearance at all. I don't follow RED like I used to (mostly because the product I am/was interested in hasn't shipped yet) but I still watch with interest when they make announcements. They are unique in their customer interfacing practices. And this time… I don't think it's gone very well. So, RED, here's a favour from me to you:
My idea of a better reply to RED customers about the removal of large storage options from the RED store before any replacement option is available. Let me see if I can help you lads with some ’empathy for your customers 101′.
(JUST TO BE 100% CLEAR: THIS IS ME WRITING, NOT RED. I DON’T WORK FOR RED.)
I’d like to try to answer some of your criticisms and concerns as best I can. While it’s clear that, by the very fact that you as customers have raised this issue, there’s a real concern over the recent removal of RED-DRIVE and RED-RAM from our store, we want to explain as best we can why this was necessary. As you know RED has traditionally been on the bleeding edge of camera technology and we do this with a small staff compared to other companies. As such, sometimes we have to make decisions that shift our focus and this sometimes means moving engineering and product work to focus on new, better technologies for the future.
As most of you no doubt know, we're right on the cusp of releasing some new hardware that will change how many of our existing accessories relate to the overall line. And it may change many things about shooting with our cameras, not the least of which are storage requirements. Part of our change of focus, along with the normal supply issues we face with accessories that require outside vendors (drives, flash, etc.), has caused us to, some feel prematurely, stop taking new orders for RED-DRIVE and RED-RAM (we are still fulfilling existing orders and providing servicing). We want to assure you that relief is on the way in the form of larger 64GB CF cards as well as some new storage technologies we can't discuss just yet. We hope you'll understand the latter and trust us on that.
We've often asked you – our working customers who, we understand, have immediate needs in your day-to-day work – to wait, to be patient and trust us, and that we would make it all worthwhile in the end. As with the RED-ONE, which was a long but ultimately fulfilling wait for many of you, we hope you'll bear with us through this growth bump as we re-tool for an even more awesome future. Please watch the site and forums and we'll keep you posted when new storage solutions are ready to ship.
Thanks,
RED
The [unusual] Conversation
// June 17th, 2010 // Comments Off on The [unusual] Conversation // Skepticism
I was on The Conversation with Dan Benjamin and Jason Seifer this week. We talked about… UFOs. No, really. We did! I’m no expert on the subject, but I think it went well. I knew more than I thought I did. Having been a believer in aliens and ghosts when I was young and reading a lot of the literature seems to have helped.
Or… am I just part of the shadowy conspiracy?!
I guess you’ll just have to watch* or listen and decide for yourself: http://5by5.tv/conversation/19
On a related (sort of) note, I’ll be speaking at this year’s The Amazing Meeting 8 on a workshop/panel with a few fellow grassroots organizers/worker bees. NOT about UFOs, but about using technology to promote and organize skeptics groups at the grassroots. Wish me luck.
* – Sorry anyone who had imagined I was some lovely young thing. The webcam will bear out… I’m just a fat not-as-young-anymore butch chick with a lot of dye in my hair and metal in her skin.
The Setup aka UsesThis.com
// May 16th, 2010 // 3 Comments » // Represent, tech
Good heavens. Through some sort of temporal anomaly or hole in spacetime, I've wound up on The Setup (usesthis.com).
Read it here, if you insist: http://maggie.mcfee.usesthis.com/
In unrelated news, I just posted a candid ‘behind the scenes’ video of James “The Amazing” Randi from a shoot at NECSS 2010 we just did: http://aggravatedmedia.com.
The RAID post that was
// March 1st, 2010 // Comments Off on The RAID post that was // Rambling, tech
My RAID post had a glaring and awful error in it (thank you to the commenters who pointed it out) as well as an implication in the graph I didn’t intend and so I’ve removed it. I don’t feel that leaving inaccurate information up is helpful to anyone. Thanks again to the commenters who pointed out my mistakes. I hope to re-visit the post at some point and I’ll be sure to give credit where it’s due. – Maggie