Following the attacks of 9/11 and the subsequent anthrax mailings (dubbed Amerithrax by the feds), funding priorities in basic research changed. Research money became available for projects to characterize infectious agents that could be deployed in bioterror attacks, as well as for efforts to develop countermeasures, vaccines, and rapid detection and diagnosis methods. Under the terms of the international Biological and Toxin Weapons Convention, to which the US is a signatory, the development of offensive weapons is prohibited.
But, some point out, the same research that could be important in developing treatments or other positive public-health measures to protect us could -- in the wrong hands -- be misused to cause harm. Finding the gene that makes a given bacterium especially virulent could help the good guys develop a way of reducing virulence, but it might also give the bad guys a way of making the bug more dangerous. This is known as "the dual-use problem," and it raises a number of ethical questions.
For starters, what defensive bioweapons research can, or should, be justified? Does rehearsing bioterror scenarios increase the likelihood of their occurring, and if so, is it unethical to release such information? Should this work be classified, or is an "open-source" approach to information sharing actually a more effective means of protecting the public?
Want to learn more? Check out this podcast from the producers of a new PBS program on bioweapons research in the United States.