In her testimony yesterday, the Secretary of Health and Human Services, Kathleen Sebelius, said that the ObamaCare system can process 17,000 users per hour, which works out to roughly 283 users per minute.

First, that number is probably not quite accurate given the volume of complaints. My guess is that the figure is potential capacity, not actual throughput. Some of the load is already being handled by state exchanges, which is likely where the 17,000-user number comes from; the Secretary was not asked to break down which systems handle which share of those users. Even so, the figure can tell us something about where the problems lie.

Well, the busiest of my domain controllers processes at least twice that. We have nearly 70,000 users sprinkled across the globe, more than 50,000 of them in North America. Supposing that one quarter of them log on at approximately the same time every morning across the four North American time zones, the planned-for peak is 12,500 users being processed simultaneously for DNS registration, domain location, and authentication.
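
For the arithmetic-minded, here is that capacity comparison as a quick Python sketch. The 17,000-per-hour figure is the Secretary's, the directory counts are mine, and the one-in-four morning logon rate is a planning assumption, not a measurement.

```python
# Back-of-envelope figures from the paragraph above.
SECRETARY_RATE_PER_HOUR = 17_000   # claimed exchange throughput
NORTH_AMERICA_USERS     = 50_000   # my users across the four NA time zones
MORNING_LOGON_FRACTION  = 0.25     # planning assumption: one in four log on together

concurrent_logons = NORTH_AMERICA_USERS * MORNING_LOGON_FRACTION   # 12,500
per_minute        = SECRETARY_RATE_PER_HOUR / 60                   # ~283

print(f"Claimed exchange rate: {SECRETARY_RATE_PER_HOUR:,}/hour (~{per_minute:.0f}/minute)")
print(f"Peak simultaneous logons my controllers plan for: {concurrent_logons:,.0f}")
```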

Of course, the ObamaCare IT system is far more complex and involves linkages across multiple databases to feed the front-end marketplace in a timely fashion. My own estimate is that it needs to handle at least 20,000 users simultaneously to process everyone who must be enrolled between October 1st and December 15th.
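
To show where a number like 20,000 can come from, here is a rough Little's-law calculation. Every input below (the enrollment target, the visits-per-sign-up multiplier, the session length, the peak hours per day) is an illustrative assumption on my part, not a figure from the testimony.

```python
from datetime import date

# Illustrative concurrency estimate for the October 1 - December 15 window.
# Every input is my own assumption, not a figure from the testimony.
TARGET_SIGNUPS     = 7_000_000   # hypothetical enrollment target for the window
VISITS_PER_SIGNUP  = 5           # browsing, comparisons, and retries per completed sign-up
SESSION_MINUTES    = 30          # assumed time an average visit stays on the system
PEAK_HOURS_PER_DAY = 12          # hours per day carrying most of the traffic

window_days     = (date(2013, 12, 15) - date(2013, 10, 1)).days   # 75 days
visits_per_hour = TARGET_SIGNUPS * VISITS_PER_SIGNUP / (window_days * PEAK_HOURS_PER_DAY)
# Little's law: concurrent users = arrival rate x time each user stays in the system.
concurrent_users = visits_per_hour * (SESSION_MINUTES / 60)

print(f"Visits needed per hour: {visits_per_hour:,.0f}")      # ~38,900
print(f"Implied concurrent users: {concurrent_users:,.0f}")   # ~19,400
```

Under those assumptions the answer lands right around 20,000 concurrent users; change any input and the figure moves, but the order of magnitude stays the same.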

The first problem that comes to mind is data normalization and systems architecture: there is probably a large amount of overhead in on-the-fly data processing and linkage. Users are not pre-linked to their data, and tables that do not share the same labels, or that carry extra fields, are a sign of too much reliance on data integration and translation at request time and too little on standardized data sets for an infrastructure of this size. The required data is not in the optimal place, or in the right form, at the moment users need to be processed.
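
To make that concrete, here is a deliberately simplified sketch of what per-request translation looks like. The record layouts, field names, and values are invented for illustration; the point is that this reconciliation work happens on every request instead of once, ahead of time.

```python
# Two upstream systems labeling the same data differently (made-up examples).
IRS_RECORD   = {"ssn": "123-45-6789", "agi": 42_000, "filing_status": "single"}
STATE_RECORD = {"soc_sec_no": "123-45-6789", "income": 42_000, "county_code": "061"}

FIELD_MAP = {"soc_sec_no": "ssn", "income": "agi"}   # same data, different labels
STANDARD_FIELDS = {"ssn", "agi", "filing_status"}

def translate(record: dict) -> dict:
    """Rename mismatched fields and drop anything outside the standard set."""
    renamed = {FIELD_MAP.get(k, k): v for k, v in record.items()}
    return {k: v for k, v in renamed.items() if k in STANDARD_FIELDS}

# Doing this per user, per request, for every linked system is pure overhead;
# with standardized data sets, translate() would not exist at request time.
print(translate(IRS_RECORD))     # already standard, passes through unchanged
print(translate(STATE_RECORD))   # {'ssn': '123-45-6789', 'agi': 42000}
```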

The proper way to do it would be to perform the data integration before the front end ever needs to process a user. Ideally, each user should be pre-linked to his or her own data; in other words, 80% of potential users should already have been identified and their data integrated into a database serving the front end. It's "having data in the right place, at the right time, in the right form" 101.
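
Here is a minimal sketch of that pre-linking idea, using an in-memory SQLite table as a stand-in for the real front-end store. The schema, keys, and helper names are my own invention; the point is that the expensive join-and-translate work runs in batch, and the front end does a single keyed read.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE frontend_profile (
                    user_id     TEXT PRIMARY KEY,
                    ssn         TEXT,
                    agi         INTEGER,
                    eligibility TEXT)""")

def batch_preintegrate(records):
    """Runs ahead of time (e.g. nightly): the back-end data is joined and
    translated once, then stored keyed by the user who will ask for it."""
    conn.executemany(
        "INSERT OR REPLACE INTO frontend_profile VALUES (?, ?, ?, ?)", records)
    conn.commit()

def frontend_lookup(user_id):
    """At logon time the front end does one indexed read: no joins, no translation."""
    return conn.execute(
        "SELECT * FROM frontend_profile WHERE user_id = ?", (user_id,)).fetchone()

batch_preintegrate([("u1001", "123-45-6789", 42_000, "subsidy-eligible")])
print(frontend_lookup("u1001"))
```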

This problem can be compounded by the wrong infrastructure for the architecture, supposing that some of it is virtualized (cloud). Because even multi-core hardware shares a single 64-bit-wide data bus, if any portion of the integration and translation systems is virtualized (or, worse, sits on the same virtual server), processing time can be hampered a great deal. I would not use virtual hardware for data processing with a user requirement this large; I would run it on physical servers and leverage all available threads on a server-by-server basis. In other words, the data processing should run on a SQL cluster, not on a virtual datacenter cluster, which is a different thing entirely.
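
As a sketch of what I mean by leveraging every available thread on a server-by-server basis, here is the shape of a batch integration job fanned out across all the cores of one physical box. `integrate_user()` is a placeholder for the real per-record join, translate, and validate work.

```python
from multiprocessing import Pool, cpu_count

def integrate_user(user_id: int) -> tuple[int, str]:
    """Placeholder for the expensive per-record integration step."""
    return user_id, "integrated"

if __name__ == "__main__":
    user_ids = range(100_000)
    # One worker per core on this physical server, not a shared virtual host.
    with Pool(processes=cpu_count()) as pool:
        results = pool.map(integrate_user, user_ids, chunksize=1_000)
    print(f"Processed {len(results):,} records on {cpu_count()} cores")
```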
