This article is currently an experimental machine translation and may contain errors. If anything is unclear, please refer to the original Chinese version. I am continuously working to improve the translation.
Background
I recently got access to a powerful EPYC 9654 server and happily ran all sorts of data analysis tasks on it, even LLMs. But when I wanted to process some private data on it, I hit a wall…
I'm not the only user on this machine. The permission setup is a textbook example of chaotic shared access: everyone shares the same account password, all users are in the sudo group, the root password circulates freely in WeChat/QQ groups, and the server's SSH is exposed to the public internet through a NAT-traversal tunnel. Even the IPMI shares the same weak username and password as the OS.
This environment is about as “secure” as a cardboard box. Any user or external attacker can easily read my data from disk or memory. And yet, such servers are widespread inside university intranets—everyone just wants convenience, no one cares about security. If I harden the server, it’ll only inconvenience others.
So I started exploring how to securely compute my data in this chaotic environment. After a quick search, I discovered AMD Secure Encrypted Virtualization (SEV).
Oh, this is confidential computing! Major cloud providers like Azure, AWS, and GCP all support it. It seems like a reasonably mature technology, so it shouldn't be too hard to set up on my own server, right? (Famous last words.)
Trusted computing has a bad reputation in consumer markets: it is often used for vendor lock-in and DRM, enabling anti-user features that restrict freedom and choice by monopolizing the definition of "security" and "trust" for pure profit (I'm looking at you, Google and Play Integrity).
Still, on mainstream desktop x86 platforms Secure Boot remains opt-outable, and memory encryption is reserved for server CPUs. Aside from having to trust and rely on vendor-specific features like SEV, it's on balance a solid security addition. Let's give it a try.
AMD’s SEV comes in several flavors:
- SEV: Secure Encrypted Virtualization. Encrypts VM memory.
- SEV-ES: Secure Encrypted Virtualization - Encrypted State. Encrypts VM memory and CPU register state.
- SEV-SNP: Secure Encrypted Virtualization - Secure Nested Paging. Adds strong memory integrity protection on top of the previous two, preventing the hypervisor from remapping, replaying, or corrupting guest pages.
SEV has been available since 1st-gen EPYC, SEV-ES requires 2nd-gen EPYC or newer, and SEV-SNP requires 3rd-gen EPYC or newer. Since the EPYC 9004 series supports all three, we'll go straight for the latest: SEV-SNP.
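A quick way to see which of these the kernel thinks the CPU supports is to look at the CPU flags (the flag names below are the kernel's; with SEV disabled in the BIOS, or on an older kernel, some of them may not show up yet):

```bash
# List the SEV-related CPU feature flags (sev, sev_es, sev_snp on a fully capable part)
grep -m1 -o 'sev[a-z_]*' /proc/cpuinfo
```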
Setting Up the Environment
Documentation on SEV-SNP across the internet is sparse. I only found one detailed guide for SEV-ES (I tried using the parameters from that post and the QEMU Wiki, but failed to boot). So I’m documenting the SEV-SNP setup process here for reference.
Host BIOS
First, we need to modify the host BIOS. My server uses an H13SSL-N motherboard, and SEV-related settings are disabled by default. Simply reboot, connect via IPMI, and change the following in the web-based BIOS interface:
- Enable SMEE (memory encryption)
- Enable SEV Control
- Set SEV-ES ASID Space Limit to a value > 1 to enable SEV-ES
- Enable the RMP Table
BIOS CPU Settings Page
Enable SEV-SNP Support
BIOS Southbridge Settings Page
Host Linux Configuration
With CPU and BIOS support in place, we now need support from the hypervisor (KVM), QEMU, and the OVMF firmware used by QEMU.
SEV-SNP is still relatively new: host-side support only landed in Ubuntu 25.04 a few days ago. My Ubuntu 22.04 LTS clearly won't be getting that backported, so I have to compile the required components myself.
Luckily, someone has already put the necessary build scripts on GitHub, so we can just pull and use them. (Ideally, you should build these components in a trusted environment, especially the OVMF firmware below, but I'm cutting corners here.)
The build on a stock Ubuntu 22.04 complained about some missing dependencies, such as nasm, iasl, and debhelper; just install them as the error messages point them out.
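For reference, a starting point might look like this (follow the actual error messages; the exact package list depends on your setup):

```bash
# Typical packages the kernel/QEMU/OVMF builds want on a stock Ubuntu 22.04
sudo apt install build-essential nasm iasl debhelper uuid-dev libssl-dev flex bison libelf-dev
```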
```bash
git clone https://github.com/AMDESE/AMDSEV.git
```
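The snippet above is truncated; with that repository the usual flow, per its README at the time of writing (double-check the current branch name), is roughly:

```bash
cd AMDSEV
git checkout snp-latest   # branch carrying the SEV-SNP host/guest patches
./build.sh --package      # builds the patched kernels, QEMU and OVMF and packages them
```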
After successful build, install the host kernel via apt and reboot, selecting the new kernel in GRUB.
```bash
cd linux
```
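The rest of that block isn't shown; it boils down to installing the freshly built host kernel packages. The file names below are placeholders, since they depend on the build:

```bash
# Install the generated host kernel packages (exact file names depend on the build)
sudo apt install ./linux-image-*.deb ./linux-headers-*.deb
```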
After the host reboots, you should see SEV-SNP loading logs in dmesg.
```
test@epyc:~$ uname -a
```
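The output is omitted here, but a few quick checks tell you whether the host side is in order (the dmesg wording varies between kernel versions, so don't grep for one specific string):

```bash
# Confirm we're running the freshly built kernel
uname -r
# Look for SEV / RMP initialization messages from the ccp driver and KVM
sudo dmesg | grep -i -E 'sev|rmp|ccp'
# On SNP-enabled host kernels, kvm_amd reports whether SNP is actually active
cat /sys/module/kvm_amd/parameters/sev_snp
```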
Launching the Guest
Prepare a Linux guest disk image in qcow2 format and transfer it to the host. I chose Debian 13 here—newer distros already include SEV-SNP guest support.
```bash
# Launch the VM
```
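The command itself is cut off above. The AMDSEV repo ships a launch-qemu.sh helper, and an invocation along these lines is what I'd expect (check ./launch-qemu.sh -h for the exact options of your checkout; debian13.qcow2 stands in for whatever image you prepared):

```bash
# Boot the prepared image as an SEV-SNP guest via the helper script from the AMDSEV repo
./launch-qemu.sh -hda debian13.qcow2 -sev-snp
```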
Connect to the running VM via SSH or other methods. In dmesg, you’ll see SEV-SNP has started in the guest.
```
root@sevsnp:~# dmesg | grep SEV
```
Successfully launched SEV-SNP protected guest VM
Performing Guest Attestation
It looks like we’ve successfully launched an SEV-SNP protected VM. The VM’s memory and register state are now protected by the AMD Secure Processor and cannot be read or tampered with by the untrusted hypervisor (in this case, our EPYC 9654 server).
But consider this scenario: suppose a sophisticated attacker tampered with the Debian 13 qcow2 image the moment I transferred it to the EPYC 9654, replacing the kernel with one that merely fakes SEV-SNP protection, printing the SEV-SNP boot messages even though no protection is active, and I end up believing I'm secure.
In other words, we currently can't prove that the VM is actually protected. To confirm that the VM really is under SEV-SNP protection and running trusted software, we need to perform remote attestation.
Preparing the Measurement
Let’s move to a trusted device—like my HomeLab—for preparation.
```bash
# Install tools for generating measurement
```
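The snippet is truncated; the key tool here is sev-snp-measure from the virtee project, which precomputes the launch digest from the OVMF binary, kernel, initrd, and kernel command line. A sketch of typical usage (the flags follow the tool's --help; every input must match the QEMU command line exactly, including the vCPU count and type):

```bash
# Install the measurement tool
pip install sev-snp-measure

# Precompute the expected SEV-SNP launch measurement for our firmware + kernel + initrd + cmdline
sev-snp-measure --mode snp --vcpus 16 --vcpu-type EPYC-v4 \
    --ovmf OVMF.fd --kernel vmlinuz --initrd myinitrd.cpio.gz \
    --append "console=ttyS0"
```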
Launching the Guest
Copy the generated OVMF.fd, vmlinuz, and myinitrd.cpio.gz files to the EPYC 9654.
```bash
# Need to edit launch-qemu.sh
```
From the logs, the actual executed command is:
```
/mnt/ssd/qemu/AMDSEV/usr/local/bin/qemu-system-x86_64 -enable-kvm -cpu EPYC-v4 -machine q35 \
    -netdev user,id=vmnic,hostfwd=tcp::8000-:22 \
    -device virtio-net-pci,disable-legacy=on,iommu_platform=true,netdev=vmnic,romfile= \
    -vnc :1 -device virtio-vga -smp 16,maxcpus=255 -m 16384M,slots=5,maxmem=24576M -no-reboot \
    -bios /mnt/ssd/qemu/OVMF.fd -machine confidential-guest-support=sev0,vmport=off \
    -object memory-backend-memfd,id=ram1,size=16384M,share=true,prealloc=false \
    -machine memory-backend=ram1 \
    -object sev-snp-guest,id=sev0,policy=0x30000,cbitpos=51,reduced-phys-bits=1,kernel-hashes=on \
    -kernel ../vmlinuz -append "console=ttyS0" -initrd ../myinitrd.cpio.gz \
    -nographic -monitor pty -monitor unix:monitor,server,nowait
```
After successful boot, since no disk is mounted, initramfs fails to proceed and stops at the busybox shell.
VM booted into initramfs
Performing Attestation and Verifying Results
Back on the trusted machine, generate a nonce (a common practice in modern security to prevent replay attacks):
```bash
openssl rand -hex 64 > nonce.hex
```
On the VM, write the nonce to a file and generate an attestation report:
```bash
echo '3502ae46269024fda5eea969d942f15b9cf9708602709d97b0de1ceaa0b712c7a6ea5170310fd6f10e9f1ad223cb4ffbc8fd036a70846cb7d50f3086c64c2da0' > nonce.hex
```
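The report request itself is not shown above. With the snpguest CLI from the virtee project the request data is a 64-byte file, so one way to do it looks like this (argument order per snpguest's help; verify against the version you packed, and xxd or an equivalent must be available in the guest):

```bash
# Decode the 128 hex characters into the 64 raw bytes used as Report Data
xxd -r -p nonce.hex > request.bin
# Ask the PSP (via /dev/sev-guest) for an attestation report over that data
snpguest report report.bin request.bin
```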
Transfer report.bin back to the trusted HomeLab machine and verify:
```bash
# Download AMD's CA certificates for Genoa (EPYC 9004 series) (ark.pem + ask.pem)
```
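The actual commands are cut off. The ARK/ASK chain can be pulled straight from AMD's Key Distribution Service; the chip-specific VCEK is easiest to grab with snpguest (its fetch subcommand syntax has shifted between releases, so check --help):

```bash
# ARK + ASK certificate chain for Genoa from AMD's KDS
curl -o cert_chain.pem https://kdsintf.amd.com/vcek/v1/Genoa/cert_chain
# The chip-specific VCEK can then be fetched with snpguest's `fetch vcek` subcommand,
# using the chip ID and TCB values embedded in report.bin
```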
We now have a cryptographically verified attestation report signed by AMD. Use snpguest display report report.bin to inspect its content.
```
Attestation Report:
```
You can see our generated nonce in the Report Data field, and the Measurement value matches the one we computed locally.
Now, as long as we trust AMD, we have cryptographically confirmed that: this VM is indeed running on a real, trusted AMD system; its memory confidentiality and integrity are protected by AMD SEV-SNP; and it’s running the exact software image (OVMF, kernel, initramfs, cmdline) we prepared in a trusted environment, unaltered.
Injecting Keys and Completing Boot
After attestation, we now have good reason to believe we’ve successfully launched a trusted VM on the shady EPYC 9654 server. The hardest part is done. Now it should be straightforward to boot a full Linux distro and start computing… right?
Think again.
Let's reconsider: we've only booted a specific kernel and initramfs. To run a complete Linux system, we still need a trusted rootfs. And even if you're bold enough to pack the entire environment into the initramfs, the ultimate goal is to bring confidential data into the VM's memory for computation.
Packing secrets, keys, or private data directly into the initramfs won't work: the initramfs is shipped in plaintext, and the same goes for any credentials. Transmitting keys or secrets over the network or the terminal is also unsafe, since the hypervisor sits in the middle and can intercept the traffic. You might think of SSH, but SSH's security relies on the host key not being compromised, and the hypervisor already has access to the SSH host key stored in the initramfs.
The core issue is this: at this point the hypervisor knows everything our encrypted VM knows. Even though we know such a VM exists, whenever we communicate with the machine we cannot tell whether we're talking to the real VM or to the hypervisor impersonating it.
With the older SEV and SEV-ES, you could inject an encrypted secret that the AMD Secure Processor would decrypt and place directly into the VM (the LAUNCH_SECRET flow), simple and effective. In the newer SEV-SNP, this mechanism seems to have been superseded by the much more complex SVSM module. (See related discussion)
This part stumped me for a while. Guess my modern cryptography knowledge isn’t quite up to par. But actually, by cleverly using the attestation process we just completed, we can now distinguish the encrypted VM from the hypervisor.
The idea is:
- In a trusted environment, generate a nonce.
- Provide the nonce to the encrypted VM; the VM uses it as Report Data to generate an attestation report.
- The encrypted VM generates a key pair in memory.
- The VM uses the public key as Report Data to generate a second attestation report.
- In the trusted environment, verify both reports. The first proves the VM is trustworthy; the second carries a public key signed by AMD, which the malicious hypervisor cannot tamper with.
- Encrypt your secret (e.g., disk encryption key) using this public key.
- The VM uses its private key to decrypt the message and obtain the secret in memory.
Note: The initramfs must not trust terminal input—no shell access should be exposed. It should only accept data through this secure flow. Since the attestation report includes the initramfs hash, we can fully control its behavior by carefully writing the init script.
This process is fairly complete. As a shortcut, you could skip the first report and only generate and verify the second one; the freshly generated public key then effectively doubles as the nonce.
Theory is sound, time to implement
Back on the trusted HomeLab, prepare an encrypted rootfs:
```bash
truncate -s 5G rootfs_enc.raw
```
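The rest of that block isn't shown, but preparing a LUKS-encrypted rootfs image is standard cryptsetup work, roughly along these lines (run as root; paths and the key file are placeholders, and the dm-integrity variant mentioned below would add an --integrity option to luksFormat):

```bash
# Random key that will later be delivered into the guest over the attested channel
dd if=/dev/urandom of=luks.key bs=64 count=1
# Format the image as LUKS2 and open it (cryptsetup attaches a loop device automatically)
cryptsetup luksFormat --type luks2 --key-file luks.key rootfs_enc.raw
cryptsetup open --key-file luks.key rootfs_enc.raw rootfs_enc
# Create a filesystem and populate it with a root filesystem (debootstrap, rsync from a VM, ...)
mkfs.ext4 /dev/mapper/rootfs_enc
mount /dev/mapper/rootfs_enc /mnt
# ... copy the rootfs into /mnt ...
umount /mnt && cryptsetup close rootfs_enc
```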
Now prepare the initramfs with the "handshake" logic. The distribution's built-in /init script is too complex for me to fully understand, so I'll replace it entirely with my own while keeping the functionality I need.
Also note: if you only installed cryptsetup while creating the rootfs, your unpacked initrd.img may not contain it, so regenerate and repack it. You may also need to add dm_integrity to /etc/initramfs-tools/modules, run update-initramfs again, and unpack the result once more.
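The script itself is not shown above; as a minimal sketch, an /init implementing the handshake could look roughly like this, assuming busybox plus snpguest, age, xxd and cryptsetup are packed into the initramfs (every device name and path below is illustrative):

```sh
#!/bin/sh
# Sketch of an /init implementing the attested key-delivery flow described above.
set -e
mount -t proc proc /proc
mount -t sysfs sysfs /sys
mount -t devtmpfs devtmpfs /dev

# 1. Accept the verifier's nonce and produce report #1 with it as Report Data.
echo "Paste nonce (hex):"; read -r nonce
printf '%s' "$nonce" | xxd -r -p > /tmp/req1.bin
snpguest report /tmp/report1.bin /tmp/req1.bin

# 2. Generate an ephemeral age keypair; it only ever exists in encrypted guest memory.
age-keygen -o /tmp/age.key 2>/dev/null
pubkey=$(grep -o 'age1.*' /tmp/age.key)

# 3. Produce report #2 with the public key as Report Data (it fits the 64-byte field).
printf '%-64s' "$pubkey" | head -c 64 > /tmp/req2.bin
snpguest report /tmp/report2.bin /tmp/req2.bin

# Ship both reports out through the untrusted console; they are AMD-signed, so that's fine.
echo "REPORT1:"; base64 /tmp/report1.bin
echo "REPORT2:"; base64 /tmp/report2.bin

# 4. Receive the age-encrypted LUKS key and decrypt it; the plaintext stays in guest memory.
echo "Enter encrypted LUKS key (age format, one-line base64):"; read -r blob
printf '%s' "$blob" | base64 -d | age -d -i /tmp/age.key > /tmp/luks.key

# 5. Unlock the encrypted rootfs (device name depends on how the disk is attached) and pivot.
cryptsetup open --key-file /tmp/luks.key /dev/vda rootfs_enc
mkdir -p /new_root
mount /dev/mapper/rootfs_enc /new_root
exec switch_root /new_root /sbin/init
```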
Pack this init script and all required tools into the initramfs. Compute the measurement value locally, then send everything to the server. Preparation is finally complete!
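Repacking the unpacked initramfs tree is a one-liner (newc is the archive format the kernel expects):

```bash
# Run from the root of the unpacked initramfs tree
find . | cpio -o -H newc | gzip -9 > ../myinitrd.cpio.gz
```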
Start the VM, feed it a nonce generated locally, and you'll get back two reports (report1.bin and report2.bin).
```
[ 7.791052] Run /init as init process
```
Locally, verify both reports’ signatures and confirm both measurement values match our local computation.
Then, verify the first report’s Report Data matches our random nonce, extract the public key from the second report’s Report Data, encrypt our LUKS key, and pass it to the VM.
```bash
snpguest verify attestation -p genoa . report1.bin
```
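The remaining trusted-side steps are cut off above. Assuming the public key was embedded as ASCII in report #2's Report Data and age is used for the wrap (both consistent with the age-format blob pasted below), finishing up could look like this; REPORT2_DATA_HEX is a placeholder for the hex string you copy out of snpguest's output:

```bash
# Inspect both reports: report1's Report Data must equal our nonce,
# report2's Report Data carries the guest's ephemeral public key
snpguest display report report1.bin
snpguest display report report2.bin

# Recover the ASCII age recipient from the hex Report Data and encrypt the LUKS key to it;
# only the guest holds the matching private key
recipient=$(echo "$REPORT2_DATA_HEX" | xxd -r -p | tr -d ' \0')
age -r "$recipient" -o luks.key.age luks.key
base64 -w0 luks.key.age   # paste this single line into the VM console
```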
Paste the resulting base64-encoded encrypted key into the VM. Watch as it decrypts successfully and the boot continues—we’re now in a familiar Ubuntu 22.04 system!
```
Enter encrypted LUKS key (age format, one-line base64):YWdlLWVuY3J5cHRpb24ub3JnL3YxCi0+IFgyNTUxOSBITW5Bclp1ZmNiU01yZmlicmhpWmJaMG1KUExRTW5DNnI4Nmo3K3FZNUBBCkZwQU5EelNtNjN2SDlQeUtBVEVGZSs0dk5WWldMTUhLSTlCblY3c3k2bE0KLS0tIFhWR3IvY0NUSkZuemplS0FRa0ozbGRvQ0Y4WS81NDQrdGJQUkVyMXBiRWsKR3IVz1uPSJtD/Uc9ojYZ2fChdRumQk7YxAKv5JUbffWvFevi/Lhyww==
```
Soon after, the boot completes, and we see the login prompt. But don’t log in directly via QEMU console—it’s unsafe. We’ve already configured SSH and recorded the host key. Just connect securely via SSH.
A seemingly ordinary VM after successful boot
Afterword
This was my first time experimenting with trusted computing. Aside from manually building QEMU and the kernel, the overall process and tooling weren't too complex; compared with attestation on plain SEV/SEV-ES, the SNP flow is much simpler now. I also got to revisit the Linux boot process and built a trust chain from UEFI firmware → kernel → initramfs → Ubuntu. I didn't expect a hand-rolled minimal initramfs to end up this straightforward when I started.
This post came from a sudden idea, and in the end I managed to build a secure computing environment on a machine where both the hardware and the software are completely untrusted. That's pretty cool. It took me a few days, and to think I originally just wanted to verify my restic backup on that server… Honestly, my files are so boring that no one would care anyway.
This article is licensed under the CC BY-NC-SA 4.0 license.
Author: lyc8503, Article link: https://blog.lyc8503.net/en/post/amd-sev-snp/
If this article was helpful or interesting to you, consider buying me a coffee ¬_¬
Feel free to comment in English below o/