Logic bombs explained: Definition, examples, prevention
A logic bomb is a set of instructions embedded in a software system that, if specified conditions are met, triggers a malicious payload to take actions against the operating system, application, or network. The actual code that does the dirty work, sometimes referred to as slag code, might be a standalone application or be hidden within a larger program.
While logic bombs are sometimes delivered via the same techniques that can infect your computer with viruses or other malware, more often they’re planted by insiders with privileged access to the system being attacked — and can therefore be quite tricky to detect.
A logic bomb is defined by three characteristics:
A logic bomb isn’t a virus, but it could be spread by one. Unlike a virus, the distinguishing characteristic of a logic bomb isn’t how it spreads, but how it’s triggered.
A quick note on terminology: Malware comes in different types, including viruses, worms, and Trojans, that are generally defined by how they spread and how they infect computers; the details vary, but by and large they are designed to find victims semi-autonomously. The part of the malware that carries out the attack, known as the payload, can work in different ways; some of these payloads themselves are logic bombs. For instance, the Stuxnet worm, created by US and Israeli intelligence to sabotage the Iranian nuclear program, has a payload that will activate only if it determines that it’s running on a computer that is part of a specific type of uranium enrichment facility.
That said, not all malicious code is malware, and not all logic bombs are delivered via viruses or their kin. In fact, as we’ll see in our examples, many logic bombs are hidden inside ordinary computer programs by the people who wrote those programs themselves.
As the Stuxnet example demonstrates, a logic bomb attack gets its name because the malicious code activates when some logical condition, or trigger, is satisfied: It can be thought of as an if-then statement. A logic bomb’s trigger can take one of two forms: positive or negative. A positive trigger goes off if something happens, whereas a negative trigger goes off if something fails to happen. Stuxnet uses a positive trigger: The worm analyzes the underlying hardware and, if it matches the system it was designed to attack, spins any attached uranium centrifuges fast enough to destroy them. There are other, somewhat more pedestrian types of positive triggers as well: A logic bomb may go off if someone attempts to open a specified file, for instance, or copy data from one directory to another.
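The if-then structure of a positive trigger can be sketched in a few lines of Python. This is a deliberately harmless illustration: the watched path is hypothetical, and the "payload" is just a returned string rather than anything destructive.

```python
import os

def check_trigger(watched_path: str) -> str:
    # Positive trigger: the payload fires *if* something happens --
    # here, the appearance of a file at a watched path.
    if os.path.exists(watched_path):
        # A real logic bomb would execute its malicious payload here.
        return "payload executed"
    return "dormant"
```

The bomb simply polls its condition; until the condition becomes true, the code is indistinguishable from an ordinary file-existence check.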
A negative trigger is best understood in terms of the sort of insider threats we noted as a common use case for a logic bomb. For instance, a disgruntled employee, suspecting they are about to be fired, may plant a logic bomb on the company servers that will erase valuable corporate data at 10 a.m. unless its creator intervenes. As long as the employee maintains access to the system, they can stop the bomb from going off, which may give them leverage in the dispute with their employer — or at least leave them satisfied that their firing will be followed by chaos once they’re gone.
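A negative trigger of this sort is essentially a dead man's switch. Here is a minimal, harmless sketch in Python; the seconds-scale timeout is purely for illustration (a real attack would measure hours or days), and the payload is replaced by a boolean check.

```python
import time

class DeadMansSwitch:
    """Negative trigger: the payload fires if something *fails* to
    happen -- here, a periodic check-in from the bomb's creator."""

    def __init__(self, timeout_seconds: float):
        self.timeout = timeout_seconds
        self.last_checkin = time.monotonic()

    def check_in(self) -> None:
        # The insider "defuses" the bomb by checking in before the deadline.
        self.last_checkin = time.monotonic()

    def should_fire(self) -> bool:
        # If the creator stops checking in, the trigger condition is met.
        return time.monotonic() - self.last_checkin > self.timeout
```

As long as `check_in()` keeps being called, the bomb stays dormant; remove the insider's access (or the insider), and the deadline eventually passes.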
The actual behavior of a logic bomb can range widely. When it comes to the insider threats that make up much of the logic bomb landscape, a few types of attack are particularly common, including file or hard drive deletions, either as a ransom threat or act of revenge, or data exfiltration, as part of a plan to use privileged information in future employment.
But truly, the things a logic bomb can do — the then that comes after the if — are limited only by the attacker’s skill and imagination. For instance, one enterprising soul managed to hide a cryptojacking logic bomb in public domain Python libraries that surreptitiously mined Bitcoin for the attacker’s benefit.
A logic bomb can be triggered by whatever specific event or condition its creator desires. Common triggers include:
You’ll sometimes see references to time bombs as a type of cyberattack; these are best understood as a subset of logic bombs, though some consider them a closely related attack in their own right. A time bomb is a logic bomb whose trigger goes off at a specific time. In some ways, this is the simplest kind of logic that can go into a logic bomb. The purpose of this kind of trigger can be similar to that of a real, physical exploding time bomb: to give the attacker enough time to clear out of the area (in this case, the computer or network where the bomb was planted), making it less likely that they’ll be affected or fingered as the attacker.
The example we gave above of a negative trigger is a more sophisticated variation on the time bomb concept, as its time deadline can be postponed by user action to create a sort of “dead man’s switch.”
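A time-bomb trigger reduces to a single date comparison. In this harmless sketch, the current date is passed in as a parameter so the trigger can be exercised without waiting for the deadline; a real bomb would read the system clock itself and hide the payload call.

```python
import datetime

def time_bomb_armed(now: datetime.date, detonation: datetime.date) -> bool:
    # Time bomb: the simplest logic-bomb trigger --
    # fire on or after a fixed date.
    return now >= detonation
```

A Christmas-Day trigger like the benign GitHub example mentioned at the end of this article is just this comparison against December 25.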
Logic bombs are, by definition, malicious. The “bomb” in “logic bomb” is of course metaphorical, although in cases like Stuxnet that target operational technology, they can wreak havoc on the physical world. But even all-digital logic bombs get that name because they’re destructive.
There are, of course, other types of programs that might be superficially similar to logic bombs but aren’t harmful. For instance, a program you’ve downloaded as a free trial might stop working after 15 days; but because you were told about that limitation when you downloaded it, it isn’t considered a logic bomb.
When triggered, logic bombs can wreak a range of havoc on targeted systems, including the following examples:
Logic bombs are a particularly pernicious type of attack because the attack code by its very nature may lie dormant for an extended period of time. In general, it’s difficult for even the best endpoint security software to sniff out code that isn’t executing.
Since some logic bombs are delivered via malware, one way to keep them off your systems is to follow anti-malware best practices:
But as we’ve seen, fighting malware isn’t enough to defuse all potential logic bombs. The cryptominers we mentioned above are an example of what’s known as a supply chain attack, in which an organization’s reuse of third-party code (open source libraries, in this case) becomes a problem when that code has a logic bomb hidden within it. And, of course, no antivirus program can protect you from a determined insider threat.
The best way to sniff out malicious code that’s being embedded in your own software, either deliberately by a disgruntled employee or inadvertently in the form of a third-party library, is to bake secure coding practices, like those that are part of the DevSecOps philosophy, into your development pipeline. These practices are meant to ensure that any code passes security tests before it’s put into production, and would prevent a lone wolf insider attacker from unilaterally changing code in an insecure way.
In 1982, a massive explosion disrupted the flow of natural gas in an important pipeline traversing Siberia. For years, a rumor has persisted that this was an act of CIA sabotage. The story goes that U.S. intelligence agents discovered that their Soviet counterparts were attempting to steal the computer code necessary to automate their pipeline from the West, since the native Soviet software industry wasn’t up to the task; so the Americans allowed the Soviets to make off with code with a logic bomb hidden in it that resulted in the destruction of the pipeline. This sabotage has sometimes been called the original logic bomb, although it’s never been confirmed by any of the parties involved, and there’s some evidence that the destruction may have just been the result of good old-fashioned incompetence.
While we may never know the truth of what happened to that pipeline, there are plenty of well-documented logic bomb attacks:
If you’d like to see the code for a simple example of a logic bomb, there’s a GitHub repository for the Christmas Logic Bomb, written in Python. This code is a time bomb that activates on Christmas Day and displays a festive message — it doesn’t do any harm, but it’s a good way to see how this kind of attack works.