Just like proprietary software, open source has its share of advantages and drawbacks. Critics of open source software often claim that its broad contributor base and publicly available source code make it a security risk. But that assessment is not fair, according to Dr Ian Levy, Technical Director of CESG, the information-security arm of the United Kingdom's GCHQ, which advises the UK government on IT security.
Open source is no worse or better than proprietary software when it comes to security, according to Levy, who dismissed some open source security myths and spoke in detail about the real security challenges at the Open Source Open Standards conference, held earlier in London.
Yesterday we covered the myths about open source.
Today we look at the challenges:
Software distribution
The online distribution methods used by many open source projects are vulnerable: genuine executables have been replaced with counterfeit builds containing malicious code, said Levy at the same conference.
"How do I get assurance about an online distribution, when the SHA-1 hash and the PGP key sit on the same server as the distribution itself? That does nothing for me. There have already been attacks against distribution servers. Nobody touched the source code, but they touched the binary, and the MD5, SHA-1 and PGP signatures along with it.
"You download the hash to check the binary, but you got the hash from the same place as the binary. Where is the trust?"
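Levy's point can be made concrete: a checksum only adds assurance if the reference value comes from a different trust domain than the download itself. The sketch below is a minimal illustration, assuming a hypothetical `verify_download` helper and a `trusted_hash` obtained out-of-band (printed documentation, a signed release announcement fetched over a separate channel), never from the distribution server.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a downloaded file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_download(path: str, trusted_hash: str) -> bool:
    # The comparison is only meaningful if trusted_hash came from a
    # different trust domain than the file -- a hash hosted next to the
    # binary can be swapped by the same attacker who swapped the binary.
    return sha256_of(path) == trusted_hash
```

The code itself is trivial; the security property lives entirely in where `trusted_hash` came from, which is exactly Levy's objection.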
Patching
The same question of provenance that applies to the source code also applies to open source software updates.
"If I use Windows Update, I know it was signed by a process operating inside Microsoft. What do I know about Mint updates?
"What do I know about software coming from a 'secure' HTTP Server?"
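The verify-before-install flow Levy alludes to can be sketched as follows. Real update channels (Windows Update, apt) use asymmetric signatures, so clients hold only a public key; the sketch below substitutes an HMAC with a hypothetical shared key purely to keep the example self-contained, and the key name is invented.

```python
import hmac
import hashlib

# Stand-in for a vendor signing key. In a real update system this would be
# an asymmetric key pair, with only the public half shipped to clients.
SIGNING_KEY = b"hypothetical-vendor-key"

def sign_update(payload: bytes) -> str:
    """Produce a hex signature over the update payload."""
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def verify_update(payload: bytes, signature: str) -> bool:
    """Refuse to install unless the signature checks out."""
    expected = sign_update(payload)
    # compare_digest is constant-time, avoiding a timing side channel.
    return hmac.compare_digest(expected, signature)
```

The structural point is that the client verifies against key material it already trusts, rather than against anything served alongside the update, which is what an HTTPS-only download cannot guarantee.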
Exploit visibility
"Open source patches have to be released as source code, so they inherently reveal the underlying problem. Even a binary patch released to fix a security vulnerability can be reverse-engineered to recreate the exploit; with open source, the patch source is simply available and shows the attacker exactly what the problem is.
"This is not necessarily a bad thing, given rapid patching: fast patch cycles shrink the window in which an exploit is usable."
Open source projects also have individuals and groups that publicly track active bugs in the code, which makes potentially unpatched flaws even more visible.
"Since these trackers are open to everyone, you can have zero-day exploits, because there is no patch yet."
Controlling the code supply chain
"How do I know who wrote the code I use, how do I know what they wrote, and how do I know what else is in there?" said Levy.
Commercial software modules can be checked by a legal team to confirm that they comply with their licensing terms.
"How can I get the same assurance with free software? What can I say about its legal status? How do I know whether anyone has reviewed the licences of all its modules?
"I'm not saying you can't do it; I'm asking: how do you do it? It's a different set of challenges."
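One way to start answering Levy's "how do you do it?" is an automated licence audit against an approved list. This is a minimal sketch with invented module names and a hard-coded inventory; a real audit would pull the data from package metadata (pip, npm, a vendored manifest) and a policy maintained by legal.

```python
# Hypothetical inventory of bundled modules and their declared licences.
MODULES = {
    "libfoo": "MIT",
    "libbar": "GPL-3.0",
    "libbaz": "Unknown",
}

# Example policy: licences the legal team has signed off on.
APPROVED = {"MIT", "BSD-3-Clause", "Apache-2.0", "GPL-3.0"}

def license_audit(modules: dict) -> list:
    """Return the modules whose licence is missing or not approved."""
    return sorted(name for name, lic in modules.items()
                  if lic not in APPROVED)
```

The mechanical check is easy; the hard part Levy is pointing at is trusting the declared licence fields in the first place, which still requires human review somewhere in the chain.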
Community fragmentation
"A change in personnel can have a much greater impact on an open source product than on a commercial one. A commercial product carries brand value, while an open source product is driven by a group of people. I would like to hope they are all broadly aligned, but there have been, and still are, factions within open source projects that radically change direction."
Developer relationships
Assessing software security depends heavily on how well you know the developers and whether you have any insight into their future plans for the software, according to Levy.
"A security assessment is more about the developers than the source code," he said.
"Anyone who thinks a security assessment means checking every line of code for vulnerabilities is completely wrong. At the level we are talking about, assessment is about whether the developers know what each other is doing, whether there is a long-term plan for maintaining security, and whether there is an incident-management plan.
"Deriving the product's architecture from its code is incredibly difficult. If you have no relationship with the developers, you cannot ask: why did you design it that way? Getting that insight is really difficult and often requires third parties."
Developer identities are often weak or non-existent
"For some projects, the developer's identity is a Gmail address. Who wants to bet their security on a Gmail account? There are other ways to authenticate developers, but these are things we need to think about."
Lack of standardized, common security infrastructure
"I can audit a company and say: do you have these standards, do you apply them, and yes, you have incidents, but do you manage them well?
"How can we do that for a disparate set of developers working on their own hardware?"