The Malware Disruption case study (ACM, n.d.) presents a difficult ethical conflict between following the rules and protecting the public. On one side, the hosting company, Rogue Services, clearly violated the ACM Code of Ethics. By ignoring requests to take down malicious software and hiding behind a "no matter what" uptime pledge, it failed to "avoid harm" (Principle 1.2) and did not contribute to the "public good" (Principle 3.1) (ACM, 2018).
However, the response from the security vendors and government agencies is more complicated. To stop Rogue Services, they released a worm, a self-replicating program that spreads across networks without user intervention, to take Rogue's systems offline. While this action succeeded in stopping the spam and ransomware, it violated Principle 2.8, which states that professionals should access computing resources only when authorized (ACM, 2018). The security team had no permission to access Rogue's machines, even though Rogue was acting maliciously.
This type of action is often called "active defense" or "hacking back." While it solved the immediate problem, it sets a risky precedent. Holzer and Lerums (2016) argue that private entities taking the law into their own hands can lead to an escalation of conflict without proper legal oversight. If professionals start ignoring rules like Principle 2.8 because they believe the "ends justify the means," it could damage the trust that is essential to the computing profession.
Ultimately, while the worm was carefully designed to limit collateral damage (upholding the spirit of minimizing harm), the decision to bypass legal channels creates an ethical grey area. It implies that when the law is too slow, technical force is an acceptable substitute, which is a dangerous view for the industry to adopt.