What is the recommended course of action? Stop buying Intel products, and devices which contain them?
What about devices with older processors? I'm still running a Sandy Bridge rig and it works fine, except for the side-channel vulnerabilities. It's probably not going to be patched. I also have a cheaper computer with a Skylake processor, which is newer yet still vulnerable!
It's only a matter of time until something really nasty comes along, making all these PCs dangerous to use. What then? Lawsuits?
My questions are only partially rhetorical.
In short:
* Core and Xeon CPUs affected, others apparently not.
* HT on or off, any kind of virtualization, and even SGX are penetrable.
* Not OS-specific, apparently.
* Sample code provided.
https://www.cyberus-technology.de/posts/2019-05-14-zombieloa...
Pandora's box was opened with the public disclosure of Spectre and Meltdown. Security researchers will continue to find new and better ways of attacking the security boundaries in processors, and there's unlikely to be an end to this any time soon. Exciting time to be in security, not such an exciting time to be a potential victim.
> macOS performance: Testing conducted by Apple in May 2019 showed as much as a 40% reduction in performance with tests that include multithreaded workloads and public benchmarks. Performance tests are conducted using specific Mac computers. Actual results will vary based on model, configuration, usage, and other factors.
from here: https://support.apple.com/en-us/HT210107
So at what point do we start producing CPUs specifically aimed at running a kernel/userland? Why don't we have a CPU architecture where a master core is dedicated to running the kernel and a bunch of other cores run userland programs? I am genuinely curious. I understand that x86 is now the dominant platform in cloud computing. But it's not like virtualization needs to be infinitely nested, right? Why not have the host platform run a single CPU to manage virtual machines, which each get their own core or 20? Would the virtual machines care that they don't have access to all the hardware, just most of it?
A potential 9% performance hit in the data center. Add in all the Spectre and Meltdown mitigations and we have potentially lost nearly two generations of Intel performance increases.
Just shows the hoops and tricks needed to keep making processors that are, on paper, faster year on year, but without node shrinks to give headroom.
14nm++++ is played out.
Some information for Linux, from LWN.net (https://lwn.net/Articles/788381/): "See this page from the kernel documentation (https://www.kernel.org/doc/html/latest/x86/mds.html#mds) for a fairly detailed description of the problem, and this page (https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/m...) for mitigation information."
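For a quick check: kernels carrying the MDS patches report their status through sysfs. A minimal Python sketch (the paths are the ones documented on the pages above, but the exact output strings vary by kernel version):

    import pathlib

    # Kernel-reported MDS status, e.g.
    # "Mitigation: Clear CPU buffers; SMT vulnerable"
    mds = pathlib.Path("/sys/devices/system/cpu/vulnerabilities/mds")
    print(mds.read_text().strip() if mds.exists()
          else "kernel predates the MDS reporting interface")

    # SMT can also be inspected/controlled at runtime on recent kernels.
    smt = pathlib.Path("/sys/devices/system/cpu/smt/control")
    if smt.exists():
        print("SMT:", smt.read_text().strip())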
It takes one rogue/unpatched VM to run and scan threads at random, undetected, over a long period of time. With HT disabled, potential hits become less likely, but are still possible given time. Is virtualization on Intel dead now? Perhaps not. But it's increasingly dangerous to use Intel for cloud services.
What impact does this have in a multi-tenant cloud environment? I'm legitimately considering moving my security-critical EC2 instances over to AMD-backed instance types right now.
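For what it's worth, a stopped EC2 instance can be switched to an AMD-backed type in place. A rough boto3 sketch, with the instance ID and target type as placeholders (m5a/r5a/t3a are the EPYC-backed counterparts of m5/r5/t3):

    import boto3

    ec2 = boto3.client("ec2")
    instance_id = "i-0123456789abcdef0"  # placeholder

    # The instance type can only be changed while the instance is stopped.
    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

    ec2.modify_instance_attribute(InstanceId=instance_id,
                                  InstanceType={"Value": "m5a.large"})
    ec2.start_instances(InstanceIds=[instance_id])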
So I'd love to post an Ask HN: which AMD laptops would you recommend for work, as alternatives to Thinkpads?
I've noticed some Thinkpads with AMD CPUs, but I feel like I'm on virgin ground when it comes to AMD and their integrated GPU offerings.
Looks like AMD CPUs are safe again.
OpenBSD was right: it disabled HT for Intel CPUs back in June 2018, due to concerns that more such CPU bugs were coming. There we go ... https://news.ycombinator.com/item?id=17350278
Why doesn't this type of news cause INTC to tank? They're up today. I know the market is up today, but (and it's probably my innate overreaction) I would think this sort of news would cause the stock to suffer.
Do cloud providers commonly float cores between VMs? I could see instances like the AWS T family (burstable) sharing, but I had always assumed that most instance types don't over-provision CPU.
If that's the case, my CPUs are likely pinned to my VM. I could still have evil userland apps spying on my own VM, but I would not expect this to allow other VMs to spy on mine.
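From inside the guest you can at least control where your own work lands. A toy, Linux-only sketch using the standard library (this shows affinity within your own VM, not what the hypervisor does underneath):

    import os

    pid = 0  # 0 means "the calling process"
    print("allowed CPUs:", sorted(os.sched_getaffinity(pid)))

    # Pin ourselves to CPUs 0 and 1, e.g. to keep untrusted workloads off
    # the sibling hyperthreads of threads handling secrets.
    os.sched_setaffinity(pid, {0, 1})
    print("now pinned to:", sorted(os.sched_getaffinity(pid)))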
I really hate these descriptions of SMT as some kind of violation of the natural relationship between CPU frontend and backend. The idea that there is a “physical core” and a “logical core” does not map to reality.
I'm sure I remember a post on here (or possibly /r/programming) a couple of years ago from an Intel employee mentioning that Intel was cutting a lot of QA staff, and that we should expect more bugs in the future. I could be imagining things though.
This sentence killed me: "Daniel Gruss, one of the researchers who discovered the latest round of chip flaws, said it works “just like” it PCs and can read data off the processor. That’s potentially a major problem in cloud environments where different customers’ virtual machines run on the same server hardware."
What are they saying here?
Can this attack allow the attacker to escape public cloud isolation methods and break into the control plane or other VMs?
This style of exploit reminds me of "The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software" (2005) - http://www.gotw.ca/publications/concurrency-ddj.htm
> Chip designers are under so much pressure to deliver ever-faster CPUs that they’ll risk changing the meaning of your program, and possibly break it, in order to make it run faster.
> ...
> applications will increasingly need to be concurrent if they want to fully exploit CPU throughput gains that have now started becoming available and will continue to materialize over the next several years. For example, Intel is talking about someday producing 100-core chips; a single-threaded application can exploit at most 1/100 of such a chip’s potential throughput.
It seems the trend in programming languages is towards better concurrency support. But why don't we yet see 100-core chips? If chip makers had to forego all speculative execution and similar tricks, would that push us toward the many-core future?
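A single-threaded program really does leave (N-1)/N of an N-core chip idle. A toy illustration using only the standard library (the actual speedup obviously depends on the machine):

    import multiprocessing as mp
    import time

    def burn(n):
        # Purely CPU-bound busy work.
        total = 0
        for i in range(n):
            total += i * i
        return total

    if __name__ == "__main__":
        work = [5_000_000] * mp.cpu_count()

        t0 = time.perf_counter()
        for n in work:
            burn(n)
        serial = time.perf_counter() - t0

        t0 = time.perf_counter()
        with mp.Pool() as pool:
            pool.map(burn, work)
        parallel = time.perf_counter() - t0

        print(f"serial: {serial:.2f}s, parallel: {parallel:.2f}s "
              f"on {mp.cpu_count()} CPUs")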
A crucial (for me, anyway) summary of the day's relevant events:
https://twitter.com/IanColdwater/status/1128395135702585347?...
I just want to plug their Hardware Security course (at the VU University Amsterdam). It's an amazing course, and it costs 1,200 euros for students who need to pay full price. I learned a lot about Spectre, Meltdown, novel forms of cache attacks, and Rowhammer when I took it.
Is there any clear source of info for sysadmins responding to the many CPU-level vulns in the past year? It's very difficult to keep track of whether fixes are needed at ucode, OS, and/or application level, and what version numbers fix each bug.
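On Linux, at least, recent kernels aggregate exactly this: each known CPU vuln gets a file under /sys/devices/system/cpu/vulnerabilities/ saying whether you're vulnerable, mitigated, or unaffected. A small sketch to dump them alongside the loaded microcode revision (application-level fixes still need tracking per vendor, unfortunately):

    import pathlib, re

    vulns = pathlib.Path("/sys/devices/system/cpu/vulnerabilities")
    for entry in sorted(vulns.iterdir()):
        print(f"{entry.name:20} {entry.read_text().strip()}")

    # Microcode revision, as reported per CPU in /proc/cpuinfo.
    m = re.search(r"microcode\s*:\s*(\S+)",
                  pathlib.Path("/proc/cpuinfo").read_text())
    print("microcode revision:", m.group(1) if m else "unknown")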
According to their blog post[1], there is little you can do against this. Running different applications on different CPUs helps against them reading each other's data, but a rogue process can still read data from the "superordinate" kernel or hypervisor.
Another non-issue on non-Intel CPUs, like SPARC. Lovely.
So far there seem to be far more of these vulnerabilities in Intel CPUs.
Is that a reflection of engineering differences or a statistical byproduct of the market share of Intel CPUs?
I run AMD not because of the security implications but because I feel every dollar that goes to Intel's competition will push Intel, and thus the entire industry, forward.
If using a cloud provider with Intel processors:
> The safest workaround to prevent this extremely powerful attack is running trusted and untrusted applications on different physical machines.
Nope!
> If this is not feasible in given contexts, disabling Hyperthreading completely represents the safest mitigation.
Nope!
Shrugs?
The best defense against all these CPU vulns is to stop running malicious code. That means getting off of shared VMs (and the like), where someone else could run malicious code next to yours, and it means not running every script your browser gets handed. Isolation was always a great idea; the poor man's isolation (VMs, processes, ...) is only useful against non-malicious, accidental interference. You want physical isolation between applications and services.
> An unprivileged attacker with the ability to execute code
That sounds like a contradiction --- if you can already execute code, I'd say you're quite privileged. It's unfortunate that their demo doesn't itself run in the browser using JS (I don't know if it's possible), because that's closer to what people might think of as "unprivileged".
> The attacker has no control over the address from which data is leaked, therefore it is necessary to know when the victim application handles the interesting data.
This is a very important point that all the Spectre/Meltdown-originated side channels have in common, and I think it deserves more attention: there's a huge difference between being able to read some random data (theoretically, a leak) and that data being actionable (practically, an exploit). Of course, as mentioned in the article, certain data has patterns, but things like encryption keys tend to be pretty much random --- and then there's the question of what exactly that key is protecting. Say you did manage to correctly read a whole TLS session key: what are you going to do with it? How are you going to get access to the network traffic it's protecting? You have just as much chance that this same exploit will leak the plaintext before it's encrypted, so the ability to do something "attackful" is still rather limited.
Even the data which has patterns, like the mentioned credit card numbers, still needs some other associated data (cardholder name, PIN, etc.) in order to actually be usable.
The unpredictability of what you get, and the speed at which you can read (the demo shows 31 seconds to read 12 bytes), IMHO leads to a situation where getting all the pieces to line up just right for one specific victim is a huge effort, and because it's timing-based, any small change in the environment could easily "shift the sand" and result in reading something entirely different from what you had planned with all the careful setup you did.
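To put the demo's rate in perspective (only the 12-bytes-in-31-seconds figure is from the article; the extrapolation is mine):

    # 12 bytes leaked in 31 seconds, per the proof-of-concept demo.
    rate = 12 / 31                 # ~0.39 bytes/second
    key_size = 32                  # bytes in an AES-256 key
    print(f"{rate:.2f} B/s -> ~{key_size / rate:.0f} s per 32-byte key")
    # ...and that assumes you already know *when* the key passes through.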
> Using ZombieLoad as a covert channel, two VMs could communicate with each other even in scenarios where they are configured in a way that forbids direct interaction between them.
IMHO that example is stretching things a bit, because it's already possible to "signal" between VMs by using indicators as crude as CPU or disk usage --- all one VM has to do to "write" is "pulse" the CPU or disk usage in whatever pattern it wants, modulating it with the data it wants to send, and the other one can "read" just by timing how long operations take. Anyone who has ever experienced things like "this machine is more responsive now, I guess the build I was doing in the background is finished" has seen this simple side-channel in action.
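To make that concrete, here's a toy load-modulated sender/receiver of the sort I mean (a sketch of the crude CPU-usage channel, not ZombieLoad itself; in practice the two halves run in separate VMs on the same host):

    import time

    BIT_PERIOD = 0.5  # seconds per bit; slow but robust

    def send_bits(bits):
        """Sender VM: '1' = burn CPU for a period, '0' = stay idle."""
        for b in bits:
            end = time.monotonic() + BIT_PERIOD
            if b:
                while time.monotonic() < end:
                    pass              # busy-loop to drive up CPU load
            else:
                time.sleep(BIT_PERIOD)

    def sample_bit():
        """Receiver VM: time a fixed chunk of work; it runs measurably
        slower whenever the sender is hogging the shared CPU."""
        t0 = time.perf_counter()
        sum(i * i for i in range(200_000))
        return time.perf_counter() - t0

    # e.g. send_bits([1, 0, 1, 1, 0]) on one side, while the other side
    # calls sample_bit() every BIT_PERIOD and thresholds the timings.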
These CPU flaws make it seem as if virtualization in the data center is becoming really, really dangerous. If these exploits continue to appear, the only way forward would be dedicated machines for each application of each customer. Essentially, this might be killing the cloud by a thousand papercuts: it loses efficiency and cost effectiveness, while locally hosted hardware does not necessarily need all the mitigations applied (there's no chance of unknown third-party code being deployed to the same server).
Sorry for being naive, but are these kinds of CPU security vulnerabilities new? Why is it that in the past 20 years we had close to zero of them in the news (at least I wasn't aware of any), and ever since Spectre and Meltdown we get something new every few months?
And as far as I am aware, they mostly affect Intel CPUs only. Why? And why not AMD? Did something go wrong in Intel's design process? And yet all the cloud vendors are still buying Intel and giving very little business to AMD.
This looks like it is from the same TU Graz people who also worked on Meltdown & Spectre.
URL changed from https://zombieloadattack.com, which points to this.
There is a home page about today's vulnerability disclosures at https://news.ycombinator.com/item?id=19911715. We're disentangling these threads so discussion can focus on what's specific about the two major discoveries. At least I think there are two.
Hooray, yet another vulnerability caused by the speculative hacks Intel implemented instead of investing in research and development.
We have had the same basic architecture since 2011's Sandy Bridge. That's 8 years of die shrinks and speculation hacks.
This is what happens when you have only 2 players in a major industry and one of them slips away from parity. AMD failed to compete with Bulldozer, so Intel hasn't had to innovate for nearly a decade.
As a result, Intel CPUs are now all slower than they were on release, and Intel may have to disable hyperthreading for their patches.
This is utterly pathetic. Between this, the other exploits, the shameless price gouging and removal of features, the utter failure of their 10nm process, and the appeal of Ryzen, I think Intel is screwed.
They've been selling us the same crap for 8 straight years. Moore's law isn't dead, Intel just tossed it aside when they realized they didn't have to lift a finger to keep raking in cash.
Can't wait for zen2 and to say bye bye to Intel for a long time.
At what point do we simply revert to using typewriters for authoring sensitive documents, and pneumatic tubes (couriers for WAN) for networking?
https://www.theguardian.com/world/2014/jul/15/germany-typewr...
It's about time to realize that the ancient Chinese were onto something when they said that all phenomena evolve only so far before they tip over the peak of maximum development and inevitably tumble downhill into overdevelopment.
P.S. Wow, hit a soft spot. Flagging this for what? For being disloyal to the ideology of everlasting growth? Try again as much as you can.
P.P.S. The Holy Church of Progress keeps flagging the heresy of the I Ching out of existence; may it prevail in its glorious ways. Curious fact: expressing your disagreement in written form takes more neurons than the flagging reflex does. Try and ye shall succeed!
Apparently Intel attempted to play down the issue by trying to award the researchers the $40,000-tier reward plus a separate $80,000 reward as a "gift" (which the researchers politely declined), instead of the maximum $100,000 reward for finding a critical vulnerability.
Intel was also planning to wait at least another six months before bringing this to light, had the researchers not threatened to release the details in May.
Source, a Dutch interview: https://www.nrc.nl/nieuws/2019/05/14/hackers-mikken-op-het-i...