www.osnews.com
The benefits and costs of writing a POSIX kernel in Go
By Thom Holwerda on 2018-10-08 23:51:05

This paper presents an evaluation of the use of a high-level language (HLL) with garbage collection to implement a monolithic POSIX-style kernel. The goal is to explore if it is reasonable to use an HLL instead of C for such kernels, by examining performance costs, implementation challenges, and programmability and safety benefits.

The paper contributes Biscuit, a kernel written in Go that implements enough of POSIX (virtual memory, mmap, TCP/IP sockets, a logging file system, poll, etc.) to execute significant applications. Biscuit makes liberal use of Go's HLL features (closures, channels, maps, interfaces, garbage collected heap allocation), which subjectively made programming easier. The most challenging puzzle was handling the possibility of running out of kernel heap memory; Biscuit benefited from the analyzability of Go source to address this challenge.

On a set of kernel-intensive benchmarks (including NGINX and Redis) the fraction of kernel CPU time Biscuit spends on HLL features (primarily garbage collection and thread stack expansion checks) ranges up to 13%. The longest single GC-related pause suffered by NGINX was 115 microseconds; the longest observed sum of GC delays to a complete NGINX client request was 600 microseconds. In experiments comparing nearly identical system call, page fault, and context switch code paths written in Go and C, the Go version was 5% to 15% slower.

Scientific papers about operating system experiments - who doesn't love them?

RE[12]: Comment by FlyingJester
By Alfman on 2018-10-11 21:51:19
kwan_e,

> Yes, but how many of them lead to actual security breaches? An exploit does not necessarily lead to an actual incident. You can have exploits for every part of the kernel, but have they actually resulted in sensitive data being accessed?

The corollary to this argument is how many breaches are unreported or even undetected? Many companies would rather hide their vulnerabilities...

http://www.osnews.com/story/3077...
Permalink - Score: 2
RE[13]: Comment by FlyingJester
By kwan_e on 2018-10-11 22:20:39
> The corollary to this argument is how many breaches are unreported or even undetected? Many companies would rather hide their vulnerabilities...

http://www.osnews.com/story/3077...

What language is G+ written in? :) And what language would most programmers use to exploit that hole? Facebook's written in PHP.

Kernel bugs would require significant resources to actually be exploitable. On the level of state actors, like what happened (allegedly) with Stuxnet. So obviously, if these kind of breaches are common, then of course we won't hear about them much because they are state secrets on both sides.

On the other hand, if kernel bugs would lead to "consumer level" breaches like the above, and the others I mentioned, surely we would have heard a lot more by now, just by the sheer numbers of attack vectors and attackers. Otherwise you'd have to imagine some kind of conspiracy where companies only leak/report breaches based on the language used to write the system that was breached...

-----------------

So this is not to downplay the severity or commonality of memory access CVEs. For sure, we must treat potential security exploits almost as seriously as actual ones. But Linus Torvalds also does have a point about the security industry being attention whores. Every bug that leads to a security exploit is advertised as if it were the end of the computing world if it were not fixed yesterday. The reality, though, is that the proof-of-concepts showing the exploit in action require jumping through significant hoops to work, compared to the ease of phishing and social engineering, or incompetence.

Edited 2018-10-11 22:26 UTC
Permalink - Score: 2
RE[14]: Comment by FlyingJester
By Alfman on 2018-10-12 03:28:45
kwan_e,

> What language is G+ written in? ;) And what language would most programmers use to exploit that hole? Facebook's written in PHP.

I don't know what language G+ is written in; let me know. But why would a language matter in terms of public disclosure? I think the incentive to hide corporate vulnerabilities is just as applicable for C, Java, PHP, or whatever.


> Kernel bugs would require significant resources to actually be exploitable.

Tell that to blackhat. Seriously, don't underestimate the lone wolf :)

> On the level of state actors, like what happened (allegedly) with Stuxnet. So obviously, if these kind of breaches are common, then of course we won't hear about them much because they are state secrets on both sides.

You've confounded me, haha. Three posts up you sort of dismiss the severity of 68% of kernel exploits because they didn't lead to a documented incident, yet here you are saying of course these kinds of breaches are common, but we won't hear about them because they are state secrets. You're kind of making my case for me, aren't you?


> On the other hand, if kernel bugs would lead to "consumer level" breaches like the above, and the others I mentioned, surely we would have heard a lot more by now, just by the sheer numbers of attack vectors and attackers. Otherwise you'd have to imagine some kind of conspiracy where companies only leak/report breaches based on the language used to write the system that was breached...


We're both saying language doesn't matter in terms of disclosure.

> So this is not to downplay the severity or commonality of memory access CVEs. For sure, we must treat potential security exploits almost as seriously as actual ones. But Linus Torvalds also does have a point about the security industry being attention whores. Every bug that leads to a security exploit is advertised as if it were the end of the computing world if it were not fixed yesterday. The reality, though, is that the proof-of-concepts showing the exploit in action require jumping through significant hoops to work, compared to the ease of phishing and social engineering, or incompetence.

I just don't see the point in calling the security industry attention whores for recognizing the widespread security problems with C. I suspect that everyone in this thread would all agree that phishing and social engineering are also problems, however in the context of an article about "the benefits and costs of writing a posix kernel in go", it makes a ton of sense to take a critical look at the computer languages themselves.
Permalink - Score: 2
RE[15]: Comment by FlyingJester
By kwan_e on 2018-10-12 06:18:58
> > Kernel bugs would require significant resources to actually be exploitable.

> Tell that to blackhat. Seriously, don't underestimate the lone wolf :)


There are much more vulnerable and valuable targets. A lone wolf would find it much easier to achieve their aims via phishing or social engineering. The lone wolf these days seems to be a script kiddie.

> > if these kind of breaches are common
> yet here you are saying of course these kinds of breaches are common


I didn't say "of course these kinds of breaches are common". I said "IF these kind of breaches are common".

> We're both saying language doesn't matter in terms of disclosure.

You were saying that companies have an incentive to hide breaches. But among the breaches that companies (and governments) have not successfully hidden from us, we don't see a proportional share traced to memory access bugs in a kernel written in an unsafe language.

So I'm saying that for your corollary argument to hold, you have to explain the lack of actual breaches tied to memory access bugs via some sort of language-biased conspiracy to withhold breaches from public knowledge. Which would be absurd. I gave Stuxnet as an example as a potential way to salvage your corollary, but the point is that in those cases we can't know. And actually, state actors have more incentive to hide breaches that relied on easy exploits.

> I just don't see the point in calling the security industry attention whores for recognizing the widespread security problems with C. I suspect that everyone in this thread would all agree that phishing and social engineering are also problems, however in the context of an article about "the benefits and costs of writing a posix kernel in go", it makes a ton of sense to take a critical look at the computer languages themselves.

But Linus' point was that's not what they're doing. They over-dramatize a lot of kernel exploits that would have required a sufficiently compromised machine to begin with, blowing everything out of proportion. That's what makes them, in his view, attention whores. The "widespread security problems with C" is not as widespread as is claimed, given my argument above about how exploits have rarely been converted into actual incidents.
Permalink - Score: 2
RE[16]: Comment by FlyingJester
By moondevil on 2018-10-12 06:58:17
> The "widespread security problems with C" is not as widespread as is claimed, given my argument above about how exploits have rarely been converted into actual incidents.

Actually they were: many of those exploits were used in attacks on Android devices, as shown in the statistics Google presented at their summit talks.

Which is why Google is driving the Kernel Self-Protection Project.

Using C is like using butcher knives without chain gloves, or driving without seat belts or motorbike helmets.

Some people have luck all their lives, others unfortunately not.

Edited 2018-10-12 06:59 UTC
Permalink - Score: 2
RE[16]: Comment by FlyingJester
By Alfman on 2018-10-12 08:23:32
kwan_e,

> There are much more vulnerable and valuable targets. A lone wolf would find it much easier to achieve their aims via phishing or social engineering. The lone wolf these days seems to be a script kiddie.

Exploits are quite valuable. Look at it in terms of supply and demand: the harder it is, the more valuable it is on the market.

https://www.wired.com/2015/04/the...


As software engineers, I think we have a responsibility to make sure computer systems are as safe as we can reasonably make them. So I'm not really comfortable with using the existence of other kinds of social attacks as an excuse for not doing our work better.


> You were saying that companies have an incentive to hide breaches. When we find out about breaches, that companies (and governments) have not successfully hidden from us, we don't see a proportional representation of them that are traced to memory access bugs in a kernel written in an unsafe language.

I'm not sure what your basis is for saying this, but some of the most damaging attacks are exactly that. WannaCry, which is said to have caused many billions in damages, exploited a memory access bug that wouldn't have occurred in a safe programming language.

https://www.rapid7.com/db/modules...


> But Linus' point was that's not what they're doing. They over-dramatize a lot of kernel exploits that would have required a sufficiently compromised machine to begin with, blowing everything out of proportion.

Well, he's often been overly defensive and dramatic himself. However, I'd need to see exactly what Linus said, otherwise I risk taking "his" point out of context. If he did say kernel exploits don't matter because they require a sufficiently compromised machine to begin with, it's still just an excuse in my book, since a privilege escalation attack can turn a modest attack into a highly damaging one.



If a different programming paradigm can exterminate one class of bugs entirely, that's an objective 'pro'. Of course there are usually pros and cons to consider, performance often being cited as the con for managed languages. Honestly I really do enjoy low level coding, I always have, but after having witnessed the same critical flaws for decades I know that if we keep doing things the same way, we'll be destined to repeat these flaws over and over again. This is why I welcome research that attempts to make code safer while finding ways to minimize the cons.
Permalink - Score: 2
RE[17]: Comment by FlyingJester
By kwan_e on 2018-10-12 15:08:48
> Exploits are quite valuable. Look at it in terms of supply and demand: the harder it is, the more valuable it is on the market.

https://www.wired.com/2015/04/the...

That's exploits in general, and some of the examples given sound more like higher level attacks than kernel level. Not to mention the very real possibility they are fakes, as the article touched on.

> As software engineers, I think we have a responsibility to make sure computer systems are as safe as we can reasonably make them. So I'm not really comfortable with using the existence of other kinds of social attacks as an excuse for not doing our work better.

I don't argue against this anywhere. In fact, I think your criteria of "reasonable" and "work better" would suggest we spend a lot more effort on guarding against social attacks, since that is where most of the damage is actually being done, and where the problems are ostensibly much easier to track down and fix.

> I'm not sure what your basis is for saying this, but some of the most damaging attacks are exactly that. Wannacry, which is said to have reached many billions in damages, was caused by a memory access bug that wouldn't have occurred in a safe programming language.

https://www.rapid7.com/db/modules...

I specifically said kernel memory access bugs, since that's where the discussion started. Those links all talk about SMB servers, which, as I said a few comments ago, is one of those applications I'd want written in high level languages, because it is a non-performance-critical network-facing application.

> If he did say kernel exploits don't matter

He didn't say that, and neither am I saying that, or saying he said that. It's about proportion. In one of his mailing list rants, he was talking about how security people like the solution for everything to be "if this process violates some security, we must kill it." The fact is the kernel sees a lot of violations that are the result of harmless bugs in userspace programs. It does no one any favours when any buggy program is killed outright, giving users the impression that the system is unstable.

It's this kind of response that is out of proportion, in his view.
Permalink - Score: 2
RE[18]: Comment by FlyingJester
By Alfman on 2018-10-12 18:04:10
kwan_e,

> That's exploits in general, and some of the examples given sound more like higher level attacks than kernel level. Not to mention the very real possibility they are fakes, as the article touched on.

> I don't argue against this anywhere. In fact, I think your criteria of "reasonable" and "work better" would suggest we spend a lot more effort on guarding against social attacks, since that is where most of the damage is actually being done, and where the problems are ostensibly much easier to track down and fix.


But so what? This article is about solving kernel vulnerabilities. We all know there are other kinds of vulnerabilities, those are important too, but those are not the topic of this article.

If other vulnerabilities are higher priorities for you, then fine, but these low level memory corruption bugs have been a very long running stain on our industry and personally I think it's time we stop sweeping it under the rug.



> I specifically said kernel memory access bugs, since that's where the discussion started. Those links all talk about SMB servers, which as I said a few comments ago, is one of those applications I'd want written in high level languages, because it is a performance-non-crucial network facing application.


You may not have intended to, but in a roundabout way you've just admitted that parts of the kernel ought to be written in high level languages to make them safer. The WannaCry ransomware is based on the NSA's EternalBlue kernel exploit. The exploit hinges on a memory overflow bug in the way the srv.sys kernel driver allocates memory.

https://blog.trendmicro.com/trend...
https://research.checkpoint.com/e...
https://www.rapid7.com/db/modules...


> He didn't say that, and neither am I saying that, or saying he said that. It's about proportion. In one of his mailing list rants, he was talking about how security people like the solution for everything to be "if this process violates some security, we must kill it." The fact is the kernel sees a lot of violations that are the result of harmless bugs in userspace programs. It does no one any favours when any buggy program is killed outright, giving users the impression that the system is unstable.

> It's this kind of response that is out of proportion, in his view.


He's entitled to his opinions too, but if you'd like to discuss something specific that linus has said, it'd be better to have a link.
Permalink - Score: 2
RE[19]: Comment by FlyingJester
By kwan_e on 2018-10-12 19:40:26
> If other vulnerabilities are higher priorities for you, then fine, but these low level memory corruption bugs have been a very long running stain on our industry and personally I think it's time we stop sweeping it under the rug.

You seem to keep missing the point I'm making. I don't prioritize social attacks more than others. I'm saying, by their actual measured effect, they are clearly the most costly to the largest number of people, and they are the easiest to find and fix (on a technical level). Those are just the facts.

Should we be stamping out memory bugs with language/compiler help? Yes. But writing a kernel in Go is like sweeping it under the rug, because sweeping things under the rug is literally garbage collection :) Let's not produce garbage in the first place, as Bjarne himself says.

> You may not have intended to, but in a roundabout way you've just admitted that parts of the kernel ought to be written in high level languages to make it safer.

If you consider C++ a high level language, sure. You can even have deterministic garbage collection too, with C++. And while recent discussions may not seem like it, I have said that I do prefer kernels be smaller in what they do, if not going full blown microkernel.

> The wannacry ransomware is based on the NSA's eternalblue kernel exploit. The exploit hinges on a memory overflow bug in the way a srv.sys kernel driver allocates memory.

First of all, it kind of proves my point that to really take advantage of a kernel level exploit you literally needed a state actor to provide the initial work. The NSA is not a lone wolf, and the bug would probably not have been so easily taken advantage of if it hadn't leaked out of the NSA via a much easier route than hacking the NSA through a kernel level exploit.

Second, it speaks to my point about how rare actual cases are. All these memory access CVEs, and we only have two or three that caused as much damage as the hype claimed.

> He's entitled to his opinions too, but if you'd like to discuss something specific that linus has said, it'd be better to have a link.

First page on Google:
https://www.theregister.co.uk/201...
Permalink - Score: 2
RE[20]: Comment by FlyingJester
By Alfman on 2018-10-12 22:30:17
kwan_e,

> You seem to keep missing the point I'm making. I don't prioritize social attacks more than others. I'm saying, by their actual measured effect, they are clearly the most costly to the largest number of people, and they are the easiest to find and fix (on a technical level). Those are just the facts.

I don't see where you provide any supporting evidence for your facts, but that's not even the point. Social attacks are a completely different animal. You want to focus on these different classes of problems, great! I fully encourage you to do so and even submit articles about it to osnews. We can discuss those too! But in the context of this article about fixing kernel memory errors, I really don't see any value in criticizing us for looking for solutions to these kernel problems. Not everything has to be mutually exclusive, you know.



> Should we be stamping out memory bugs with language/compiler help? Yes. But writing a kernel in Go is like sweeping it under the rug, because sweeping things under the rug is literally garbage collection ;) Let's not produce garbage in the first place, as Bjarne himself says.

All right, those are the things that legitimately need to be debated. Just think, we could have been exchanging tons of ideas instead of arguing over whether it's even worth solving. I feel this lowers the quality of discussion, don't you?

> If you consider C++ a high level language, sure. You can even have deterministic garbage collection too, with C++. And while recent discussions may not seem like it, I have said that I do prefer kernels be smaller in what they do, if not going full blown microkernel.

C++ straddles safe and unsafe. C++ memory abstractions can be safe, but C++ doesn't enforce/verify it. I agree C++ is worth having on the table because it's the original successor to C, but I think we can do better today. Many of us who use C++ would do just as well to use a more modern variant like D. It gives you a C flavor + OOP, but reduces a lot of the ugliness that evolved with C & C++. Of course going into that is a huge discussion unto itself, so I don't intend to do it now. Criticizing languages is a hot button for many people, haha. I don't know if it can be avoided.

> First of all, kind of proves my point that to really take advantage of a kernel level exploit you literally needed a state actor to provide the initial work. The NSA is not a lone wolf, and the bug would probably not have been so easily been taken advantage of if it didn't leak out of the NSA via a much easier route than hacking the NSA through a kernel level exploit.

Hacking kernels doesn't require as many resources as you think it does. Realistically high school students can and do teach themselves how to do it. There's no denying the NSA drastically increases the number of exploits it has by hiring so many people, but what those people do is certainly doable by a lone wolf.

In any case, I maintain that our operating systems should be designed to higher safety standards regardless of who might want to break in.

> First page on Google:
https://www.theregister.co.uk/201...

It's just his usual expletive self, cool. I don't see what it adds here exactly.

Edited 2018-10-12 22:33 UTC
Permalink - Score: 2

© OSNews LLC 1997-2007. All Rights Reserved.
The readers' comments are owned and a responsibility of whoever posted them.