Cringely: Is cyber insurance AAA for data or another back door?

Posted by admin on August 14, 2013 in Computers, Programming, Technology

Comment from Cave Painter, August 4, 2013 at 10:10 pm

…what about all the pieces of code in between the customer and my server? What about the operating system and support applications on the machine hosting my application? What about the embedded microcode in the chip that implements the higher-level abstraction that is the CPU (made in Taiwan or China)? What about those things in all the machines that my application’s traffic traverses as it travels the internet? All of those things have the potential to expose my application or its data to exploitation one way or another.


Ultimately it comes down to the question, “who do you trust?” Unless you can control the implementation of every piece of software, firmware, and hardware your applications reside on, and everything their communications traverse, you will not be able to certify that your implementation is really secure. Since that is out of the question, how do you take the guesswork out of deciding which software and products to trust and which to run away from screaming?

What do you think about this as a possible solution: a certification ‘badge’ attesting that a particular software or hardware product is certified secure? The product in question would have to open all of its code for review by independent auditors. It would also be stress-tested in a live environment by anyone who cares to try to break the system, with awards paid for each exploit found. Upon passing these tests over a given timeframe, the product would be awarded the badge. At first, I don’t think many systems would pass the test. But as more did, momentum would grow: purchasers might get discounts on their insurance for using ‘badged’ products in their networks instead of non-badged ones, and so on…

I think this would drive developers to go back to their engineering roots and really approach systems development with the KISS principle (Keep It Simple, Stupid) in mind. I think it would also drive developers to more formally split development into two areas, systems development and applications development, so that the security-sensitive aspects (e.g. memory allocation/deallocation management, the source of many buffer overflow exploits) are abstracted away from application developers. Why should application developers have to reinvent the systems-level security wheel every time they create a new app? It doesn’t make sense to me. Finally, because of the need for open access to code and test implementations, it would quickly establish who to trust, and who the snake oil salesmen in the pack are….
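To make that abstraction point concrete, here is a minimal C sketch (the names, such as bounded_copy, are hypothetical) contrasting the application-level pattern that produces buffer overflows with a systems-level helper that keeps all of the length management inside the library:

    #include <stdio.h>
    #include <string.h>

    /* The application-level anti-pattern: the developer manages the
       buffer size by hand and gets it wrong. */
    void risky_copy(const char *input) {
        char buf[16];
        strcpy(buf, input);   /* no bounds check: overflows if input > 15 chars */
        printf("%s\n", buf);
    }

    /* The systems-level alternative: length management is done once,
       inside the abstraction, so application code cannot overflow. */
    typedef struct {
        char  *data;
        size_t cap;
    } bounded_buf;

    int bounded_copy(bounded_buf *b, const char *input) {
        size_t n = strlen(input);
        if (n + 1 > b->cap)
            return -1;        /* reject the input instead of corrupting memory */
        memcpy(b->data, input, n + 1);
        return 0;
    }

    int main(void) {
        char storage[16];
        bounded_buf b = { storage, sizeof storage };
        if (bounded_copy(&b, "hello") == 0)
            printf("%s\n", b.data);
        if (bounded_copy(&b, "far too long for a sixteen byte buffer") != 0)
            fprintf(stderr, "oversized input rejected\n");
        return 0;
    }

With the checked helper, an application developer never touches the raw length arithmetic; that is exactly the kind of systems-level wheel the comment argues should be built once rather than reinvented in every app.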


From <http://www.cringely.com/2013/08/04/will-cyber-insurance-be-aaa-for-our-data-or-yet-another-back-door>
