How much worse will the ObamaCare website glitches get?
posted at 12:01 pm on October 7, 2013 by Allahpundit
Said O this weekend of his massive technological faceplant, “Folks are working around the clock and have been systematically reducing the wait times.” No doubt that’s true; the shutdown might stop the feds from manning the Amber Alert site, but rest assured they’ll find a loophole to keep repairs to Healthcare.gov humming. But how much is left to repair? Ezra Klein’s WaPo Wonkblog has a useful, user-friendly interview with a tech expert explaining that the O-Care website has two major problems, one of which is easily resolved and one of which isn’t. The easy one is scalability, i.e., adding more servers to handle the massive traffic flowing to the site. (Why HHS underestimated that traffic so badly is a mystery to everyone except them.) The other is coding, which was always going to be difficult with an undertaking as massive as ObamaCare even if the site administrators had been rigorous about debugging before the rollout.
They … were not rigorous:
Most of the problems like these are in the software. Hardware is the easy part. You can add more hardware and do it easily. Software takes more time. In the rush of getting this out, it seems like testing wasn’t done completely. My expectation is that these problems should go away in the next few weeks. The site still won’t be as fast as something like Netflix, but it should work.
The founder of FMS, a software firm, came to the same conclusion when trying to access the site last Tuesday, when it launched. Quote:
What should clearly be an enterprise-quality, highly scalable software application felt like it wouldn’t pass a basic code review. It appears the people who built the site don’t know what they’re doing, never used it, and didn’t test it…
It makes me wonder if this is the first paid application created by these developers. How much did the contractor receive for creating this awful solution? Was it awarded to the lowest-price bidder? As a taxpayer, I hope we didn’t pay a premium for this quality, because it needs to be rebuilt. And fixing, testing, and redeploying a live application like this is non-trivial. The managers who approved this system before it went live should be held accountable, along with the people who selected them.
Our Professional Solutions Group has created many mission-critical, custom software applications where scalability, reliability and quality are paramount. For instance, we built the Logistics Support System (LSS) for International Humanitarian Relief where lives are dependent on accurate, timely data on a global scale. I know what’s involved in creating great software, and this ain’t it. Healthcare.gov is simply an insurance quote system. As a software developer, I’m embarrassed for my profession. If FMS ever delivered such crap, I’d be personally inconsolable. This couldn’t pass an introductory computer science class.
In case you missed Erika’s post yesterday, tech experts told Reuters that the site has so many plug-ins, scripts, and otherwise inexplicable features (including an upload function!) streaming data between the user’s computer and the server that it’s inadvertently multiplying the load on its own system — not unlike a self-perpetrated DDoS attack, per one of Reuters’s sources. That is to say, the coding is so terrible that it’s actually compounding the scalability problem. So maybe HHS did properly estimate how much traffic there would be. They just didn’t anticipate that the site’s architecture would be such an unholy clusterfark that it would end up multiplying that traffic to unmanageable levels.
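To see why page bloat compounds a traffic problem, here’s some back-of-the-envelope arithmetic. The numbers below are made up for illustration — they are not the actual Healthcare.gov figures — but they show how capacity planning done per *visitor* can wildly understate the per-*request* load when every page view drags in dozens of scripts and plug-ins:

```python
# Hypothetical illustration: each page view triggers one HTML fetch
# plus one additional request per script/plug-in/stylesheet ("asset").
def requests_per_second(visitors_per_sec, assets_per_page):
    """Total backend requests generated by a given visitor rate."""
    return visitors_per_sec * (1 + assets_per_page)

# Same visitor estimate, two very different sites (invented numbers):
lean_site = requests_per_second(1000, 5)     # 6,000 requests/sec
bloated_site = requests_per_second(1000, 90) # 91,000 requests/sec

# Identical traffic forecast, ~15x the actual server load —
# the architecture itself amplifies the traffic.
amplification = bloated_site / lean_site
```

The point of the sketch: if HHS sized its servers against the left-hand number while shipping the right-hand site, the traffic estimate could have been roughly right and the site would still fall over.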
Remember, though: This is the easy part. Eventually, someone in the federal brain trust will figure out how to tweak the site so that it’s capable of processing basic vital information about the enrollee and creating an account for him/her, just like every privately run commercial website in the world does every day. The hard part, which has already forced some of the state exchanges to partially suspend website operations for a while, is accurately calculating the subsidies that each enrollee is entitled to under the law. If the coding is so poor that it can’t create accounts for people, how will it handle a higher-end function like that? Tech experts would love to know:
Several information technology experts predicted there would be even more technical problems as online systems confront premium payments, changes in eligibility and other complex tasks.
“Is this just the tip of the iceberg?” asked Harold Tuck, former chief information officer for San Diego County. “There ought to have been better beta testing of the systems, and these errors wouldn’t have come up.”
Bill Curtis, chief scientific officer at CAST, a New York-based firm that analyzes information technology systems, said the heavy traffic probably explained many of the problems. “When you have this kind of volume, it exposes all kinds of weaknesses,” he said.
The good news for O-Care fans is that they have tested the parts of the software responsible for calculating subsidies. The bad news is that, as of September 20th, this was the result of those tests. There’s an odd symmetry in all this. As Pelosi once famously noted, they didn’t find out what was in the bill until they passed it; evidently they won’t find out what the website is and isn’t capable of until hundreds of thousands of people are struggling with it. But that’s true to the program’s origins. O-Care began haphazardly, as an applause line in a speech, and after endless political, legal, and technological headaches, here we are.
Via the Free Beacon, here’s Chuck Todd declaring disaster. Exit question: Didn’t HHS promise on Friday that there would be “significant improvements in the online consumer experience” by Monday? Philip Klein tried again this morning and still couldn’t log on.