Trust, but always verify. You are not immune.
I feel like "PoC" wasn’t the clearest initialism to use in a title here.
Sorry about that, but this is the technology community. PoC means something specific in tech: Proof of Concept. Every community or group has its own acronyms, even when they overlap with others’. I didn’t think that acronym needed clarification in this context.
The repo is about an exploit and then hundreds of people just run the script? Seems interesting
Not that unusual, unfortunately. The infosec community relies on researchers publishing PoC exploits in order for people to determine whether they’re affected or not by a given vulnerability, but that trust in PoCs can obviously be exploited.
Not everyone has the time or knowledge to develop their own PoCs, but you should definitely not run a PoC you can’t understand, which is unfortunately rather common.
Never, EVER, do anything security related while sleep deprived, drunk, high, having sex, or all of the above.
After that… no, don’t trust. Zero trust.
There are basic hygiene measures to run anything related to any exploit — including “just” PoCs — depending on how risky a total pwn would be:
- container
- VM
- separate machine
- airgapped
- airgapped in a Faraday cage (tinfoil works wonders to kill WiFi; a cheap DVB stick turned SDR is great for making sure).
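To make the container option concrete, here’s a minimal sketch, assuming Docker is installed; the `poc/` directory and `exploit.sh` names are placeholders:

```shell
# Sketch: run an untrusted PoC in a throwaway container with no network,
# a read-only root filesystem, dropped capabilities, and only the PoC
# directory mounted (read-only). "poc/" and "exploit.sh" are placeholders.
run_poc_isolated() {
  docker run --rm \
    --network none \
    --read-only \
    --cap-drop ALL \
    --tmpfs /tmp \
    -v "$(pwd)/poc:/poc:ro" \
    debian:stable sh -c 'cd /poc && sh ./exploit.sh'
}
```

Note that `--network none` also blocks any exploit that legitimately needs to reach a target; for network-based PoCs you’d point it at a victim container on an internal-only Docker network instead.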
Reading through the code is nice, and should be done anyway from an educational point of view… but even when “sure”, basic hygiene still applies.
Keeping tokens in one VM (or a few) while running the exploit in another is also a good idea. Stuff like "Windows → WSL2 → Docker" works wonders (but beware of VSCode’s pass-through containers). Bonus points if passkeys and a fingerprint reader get involved. Extra bonus points for logging out before testing (if it asks to unlock any passkey… well, don’t), then logging out again afterwards.
What I’m not so sure about is deleting the siphoned data without alerting the potential victims. Everyone kind of failed at security, but still. A heads-up to rotate all keys would be nice.
I’m sorry this happened, but it seems rather reckless of the author to be running "malicious PoCs" on their "daily driver" (i.e., the PC they use for everything).
If I were in the habit of running "malicious PoCs", you can be certain it would be isolated from the rest of my system. This could be in a sandbox or a VM. Heck, even just creating a dedicated (one-time-use) new user would have been better than "Hey, let me just download and run some random shell script. Oh, it needs root? No problem!"
This does kind of drive home some points. Obviously, once malware is running with your full user permissions, all bets are off. But there are some things that could have mitigated harm here.
The malware wasn’t just mining cryptocurrency—it was also stealing as much sensitive information as possible. It collected:
- SSH keys from ~/.ssh/
If you password-protect your SSH keys with a decent passphrase, it will help address this. Now, the problem is that any software that can get at your SSH keys probably also has a shot at setting up some kind of keylogger, but at least it makes this not a one-step process.
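A quick sketch of what that looks like in practice (the passphrase and paths here are illustrative):

```shell
# Generate a new key that is encrypted at rest with a passphrase:
tmp=$(mktemp -d)
ssh-keygen -q -t ed25519 -N 'correct horse battery staple' -f "$tmp/id_ed25519"

# Add or change the passphrase on an existing key:
# ssh-keygen -p -f ~/.ssh/id_ed25519
```

With a passphrase set, a stolen ~/.ssh/ directory is only useful after an offline guessing attack (or after the attacker also captures the passphrase), rather than immediately.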
- Shell history from .bash_history and .zsh_history
Avoiding putting sensitive data in command-line arguments is a good habit to be in. On a normal system, arguments are visible to all processes system-wide, which already creates a meaningful leak on multi-user systems, and various pieces of command-line software go out of their way to avoid having passwords and similar secrets passed on the command line.
In this case, I assume part of the goal may be finding other hosts the user might be SSHing to, but best not to compromise those credentials here as well.
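It’s easy to see why: on Linux, any user can read other processes’ command lines. A small demonstration, using `sleep` to stand in for a command with a secret argument:

```shell
# Start a long-running process whose arguments stand in for a secret:
sleep 30 &
pid=$!

# Any local user can read those arguments via ps (or /proc/<pid>/cmdline):
ps -p "$pid" -o args=        # prints: sleep 30

kill "$pid"
```

If "30" were a password, every process on the box would have had a window to read it.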
- AWS and Azure credentials from ~/.aws/ and ~/.azure/
I’m not familiar with the current forms of these, but I bet they provide some way to avoid storing unencrypted credentials there.
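For what it’s worth, the AWS CLI does support sourcing credentials from an external command instead of a plaintext file, via the documented `credential_process` setting in `~/.aws/config`; the helper path and profile name below are hypothetical:

```ini
# ~/.aws/config — fetch short-lived credentials from an external helper
# (e.g. a secrets-manager or hardware-backed tool) instead of keeping
# them in plaintext in ~/.aws/credentials.
[profile locked-down]
credential_process = /usr/local/bin/fetch-aws-creds
```

The helper just has to print a small JSON credentials object to stdout, so the long-lived secret never needs to sit on disk unencrypted.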
- Environment variables and system information
Environment variables are a really good place to avoid putting sensitive data, at least when talking about variables exported to all processes a user runs. Software that crashes and uploads a crash dump to God-knows-where will also tend to dump its environment variables along with it, since they’re important debugging information. Storing credentials in an environment variable is not a good idea.
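A quick illustration of the difference between an exported variable (inherited by every child process) and a plain, unexported shell variable:

```shell
# Exported: every child process sees it (and may dump it in crash reports).
export API_TOKEN='hunter2'
sh -c 'printf "child sees: %s\n" "$API_TOKEN"'    # child sees: hunter2

# Unexported: stays in the current shell only.
unset API_TOKEN
token='hunter2'
sh -c 'printf "child sees: [%s]\n" "$token"'      # child sees: []
```

If a secret must pass through a variable at all, keeping it unexported and handing it only to the one process that needs it limits the blast radius.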
This experience was a harsh reminder to never blindly trust PoC exploits, especially ones that include random files like PDFs.
I feel like one thing that might help is software making it really easy to create a container that runs isolated by default, with minimal access to the rest of the system, and then lets the user easily add individual permissions. I’ll sometimes use firejail, but it has a "default-insecure" model, which really isn’t great for dealing with this sort of thing. Maybe use iptables or something to detect network-access attempts and let the user approve network access per host; you can’t simply block outbound network access for this sort of software, since it’s presumably demonstrating some kind of network-based exploit.
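For reference, firejail can at least be pushed toward default-deny per invocation; the flags below are real firejail options, but the PoC script name is a placeholder:

```shell
# Sketch: run a PoC under firejail with networking removed, a throwaway
# private home directory, a private /tmp, and all capabilities dropped.
# "./poc.sh" is a placeholder.
run_poc_jailed() {
  firejail --net=none --private --private-tmp --caps.drop=all ./poc.sh
}
```

That’s still opt-in security, though; the point stands that an allow-list model with interactive approval would be a better default.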
If you shouldn’t use sensitive information as command line arguments and also avoid environment variables for passwords, how should you pass such data to programs short of setting up a configuration file?
For the command line, do what OpenSSH does: take passwords on the terminal.
For environment variables, the issue is passing them to all programs; you don’t want to put credentials in a .bashenv or similar.
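As a sketch, the "take passwords on the terminal" approach works even in plain shell scripts; `read_secret` here is a hypothetical helper, not a standard command:

```shell
# Hypothetical helper: read a secret without echoing it, the way
# ssh and sudo do, instead of accepting it as an argument.
read_secret() {
  prompt=${1:-'Password: '}
  printf '%s' "$prompt" >&2
  if [ -t 0 ]; then stty -echo; fi   # disable echo only on a real terminal
  IFS= read -r secret
  if [ -t 0 ]; then stty echo; printf '\n' >&2; fi
  printf '%s' "$secret"
}

# Usage: token=$(read_secret 'Enter token: ')
```

The secret never appears in the process list or the environment, only in the one shell variable that captures it.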
Good writeup!
Definitely never good to run PoCs sight unseen; mostly not because of this kind of situation, but even just because different PoCs will have different results, and you need to know what to expect.
Also, if you see any level of obfuscation in PoC code, it’s more than likely malicious.