Review of the
Workshop on Ethics in Computer Security Research,
Tenerife, Canary Islands, Spain
January 28-29, 2010
Review by Vaibhav Garg and Debin Liu
3/17/2010
Disclaimer: The following review of the workshop is limited by the authors' ability to comprehend the many dimensions of the various talks. If we have made a mistake or quoted someone incorrectly, we apologize in advance and, when notified, will try to rectify it as soon as possible.
Disclaimer II: In some places speakers are referred to by first name: Roger for Roger Dingledine, Paul for Paul Syverson, Lorrie for Lorrie Faith Cranor, Sven for Sven Dietrich, and David for David Dittrich. We apologize for using first names; it is not meant to imply familiarity, only to keep things simple. If you said something and are not mentioned by name, we again apologize. Most of the notes were taken by hand and, in retrospect, are not very legible.
The workshop was held on January 28, 2010, in Tenerife, Canary Islands, Spain, co-located with the Fourteenth International Conference on Financial Cryptography and Data Security. This report gives a short summary of each of the talks and then reports the questions asked. Where possible we also list the person who asked the question. Slides of the talks and other information are available here: http://www.cs.stevens.edu/~spock/wecsr2010/program.html.
The keynote talk was given by Ken Fleischmann from the University of Maryland. His talk was titled "Preaching What We Practice: Teaching Ethics to Computer Security Professionals" and the slides are available here: http://www.cs.stevens.edu/~spock/wecsr2010/slides/FleischmannWECSR2010.ppt. He talked about the importance of awareness and how critical it is to make the computer security community realize the real-world consequences of their research. He recommended using researchers' existing individual value systems to get them to realize the ethical implications of their work, suggesting that this 'fine tuning' would work better than trying to make computer security researchers 'better people'.
Questions
Sven: An intercultural approach to ethics should be considered, since different security communities have different value systems. Maybe we should look at the differences. The question is, how do we go about collecting this data?
A: Culture is broader than national culture. A survey-based approach can be considered for data collection since it is more scalable, but it needs to be supplemented with interviews.
Q: Values may be driven by culture, for example by Islam in Lebanon, or by political systems, for example communism in China.
A: While spirituality and religion do affect value systems, they are only one of many factors.
Q: Well, but they can have significant consequences. For example, in Lebanon, if an employer does not pay well, the employee is not expected by the religion to be ethical.
A: Yes, but we have to take into account the value systems of all stakeholders, not just one, while at the same time allowing for tradeoffs.
Sven: At Defcon it is perfectly acceptable to expose passwords, so different security communities have different values. To study this, how do we set up a remote survey?
A: Look at the people who publish in security, get their info, and send them questions. We need to ask about things like religion and organization, along with where they stand on critical issues. These surveys can be anonymous.
Q: What about moral relativism?
A: While everyone might have different value systems, we need to collaborate and make living documents that reflect our value systems and keep them relevant at the same time.
Roger: How do we collect data on Tor usage? We have a bunch of questions to answer. For example, what kind of behavior do we see when people are anonymous? Can we deanonymize users by traffic or by content? Can we give them a Java applet to unmask them? What application protocols do they use? What language are the web pages in? Do they use SSL? Do they check SSL fingerprints? There are some questions that I do not want to answer. But grad students come up to me and say, "I want to break the law in the following ways." At the same time, Tor funders want to see growth in users in places with human rights issues. We also want to react to blocking mechanisms quickly. Can we publish traffic traces? No, we should not, because if we all publish traffic traces, the aggregate data might reveal information that individual snippets do not. We should ask people to get IRB approval for collecting this kind of data. The only people who are writing papers are those who are doing it wrong. At the same time, we do not want people to stop doing research because of legal hassles.
In general, Roger talked about the importance of doing research on anonymous networks, and in particular on data gathered from these anonymous networks. But he also pointed out the dangers of doing this kind of research. For example, if people using the Tor network thought that Tor was storing their activity data, they would probably switch to a different anonymizing network. There is also the cost of possible consequences. He talked about how researchers need to realize the implications of this research and the importance of having a body like the IRB that can oversee research proposals to ensure ethical conduct. He concluded by suggesting a shift from "Don't do that! You might get it wrong" to "Here are some guidelines for getting it right".
Questions
- Is a pen register legal?
You need to fill out a form. But without authority it is a criminal offense, not a civil one. However, at the same time, the feds would never prosecute for this because they do not want a precedent to be set in this case. There is also the storage law, which says that if your mail has been in storage for 18 months it is not subject to wiretap laws.
- The EFF suggests, "Stay the frak away from content," but they have a vested interest. How about the argument that you are not hurting anyone?
- Well, the idea is that good people should be able to publish, so we need some guidelines.
- What about users' expectation of privacy?
Roger: There are different jurisdictional provisions. US worries about
wiretapping. EU worries about data protection. Do not collect more
data than you need and do not mishandle the data that you have.
Paul: There is the question of what to do with old data. If you throw it out, how do the other researchers know that you collected data and that it was good?
Roger: We should not do anything that helps people launch attacks. Do not collect data in secret. Do not have data sitting around. We can let the IRB decide, but they are not fully informed.
Lorrie: We can give the IRB guidelines.
Roger: It is not always possible to do informed consent. We can try
for community consent. But irrespective of that we have to explain to
the user how we are protecting them.
There was a fairly heated debate and the eventual guidelines were:
- Collect as little data as possible (data minimization).
- Publish data only in aggregate (data aggregation).
- Only collect data that is publishable, and then publish it (transparency).
David: We might disrupt the experience of good people when we try to investigate malware. This might make us appear to be the bad guys, and we might thus waste the time of law enforcement officers who deal with cyber crime and are looking for actual criminals. Every time we publish a research article we provide knowledge and tools that the bad guys can use to improve their attacks.
The talk stressed that researchers need to work with law enforcement agencies (LEAs) instead of in isolation, especially since malware is becoming more complex every day. It used to be that one could use the command-and-control channel to take over a botnet; now there are peer-to-peer botnets. The talk also addressed the nature of cyber vigilantes and how, and whether, researchers should fill that role.
Questions
Is it ethical to alter an active crime scene?
Roger: Maybe it is, if I know the LEA is going to fail.
Paul: But it is hard to figure out if they are going to fail or succeed.
David: Maybe we should have a certification for who gets to study criminals.
Serge Egelman: Vigilante justice is usually a consequence of inadequate law enforcement. So anecdotally it seems law enforcement has been bad.
Paul: There is also a national security aspect to this.
Sven: There are times when law enforcement cannot obtain data and
researchers can.
Rachel Greenstadt: Watching vs. vigilantism. Maybe we need to have thresholds above which intervention is justified.
The next presentation, on proactive threat research, was structured in four acts and had the following actors: Adversary, Moral Agent, Security Personnel, Internet-using Public, and Researcher.
Act 1: The tragedy of the user
Security people usually have the same answers for the public: the Internet is bad, switch to Apple, run antivirus, patch, don't click on things, use a firewall, etc. The message has stayed the same despite the growth in the number of threats. The motivation for attacks has changed as well; attacks are now targeted. Security has mostly been reactive. In any other adversarial model, like war, we know that there can be no victory without offense. We need to do this for computer security. We need revolutionary vs. evolutionary thinking.
Act 2: Ethical standards for proactive research - Creation of new
threats is unethical.
Let's talk utilitarianism: to be ethical you need to show utility and no net negative impact. A deontological approach must demonstrate that no harm is being done. So for any school of ethics we need a vetting process like the IRB. But malicious software under current guidelines does not require ethics clearance. This is different from biology, where you have to go to the safety office first.
Act 3: Ethical Research Methods
We talk only about research methods and not the impact of publishing. Safe ones include: mathematical modeling (no code), simulation, component testing (if you can break your task into individual components, which by themselves are harmless), existence proofs (looking at viability), and Gedanken experiments (only the idea is important).
Act 4: Ethical dissemination
Questions
A FRAMEWORK FOR UNDERSTANDING AND APPLYING ETHICAL PRINCIPLES IN NETWORK AND SECURITY RESEARCH (presenter: Erin Kenneally)
She talked about the difference between the rapid growth of technology and the slow rate at which law catches up with it. She identified the grey zone between the two as the most important one for ethical research. The Belmont report came out with ethical guidelines for biomedical research; Belmont 2.0 is for ICT research. Ethical Impact Assessment (EIA) is an extension of Privacy Impact Assessment (PIA). The idea is that if you give researchers tools, they will embrace them. Technical people do not like the grey area.
Consent is a problem. Benefits should be shared between the researchers and the research subjects. Assess the full range of risks, minimize harm, and then mitigate. In theory you don't break any laws, but legal due diligence might be hard, so you can change your risk posture by getting agreements, for example from ISPs.
Questions
What do you mean by identifiable info?
It is a non-trivial problem. Everything seems identifiable. We have to consider reasonable deanonymization. We have to look at probable vs. possible.
Len & Roger: What do you do with the information that is already out there? By today's standard the deanonymization might be less than probable, but in a few years, with evolving technology, it might be very much probable.
That is true. But we can't look into the future. We can only judge by the standard of the time.

PANEL: TOWARDS A CODE OF ETHICS FOR COMPUTER SECURITY RESEARCH (panelists: Lorrie Faith Cranor, Erin Kenneally, Len Sassaman, moderator: Sven Dietrich)
For brevity they will be referred to here as C, K, S, and D respectively.
C: I work with user studies data. Then there is network data, and then there are people who study criminal behavior. CS people have no idea about the IRB and how it applies to them. We need to improve the IRB process, and we need guidelines for IRBs focused on CS research. Dissemination of results requires a completely different set of guidelines.
K: We are working on a document of research guidelines for CS research. We require feedback so that the document is something that is adopted. Publish your ethical impact like you publish your methodology in a paper. We need to have a library for ethical impact assessment. We need to document our impressions of impact so that we can learn from the past.
S: For a number of projects that I work on, it is not possible to get IRB approval because you do not know whose systems they are. We tried a bunch of attacks that we did not disclose. We talked to various vendors and got them to take action. The response was surprisingly good. In 2000, they would have considered this kind of research to be adversarial. But things have changed. I recently saw a crypto primitive that was about to be broken being preemptively phased out.
D: So you think some years ago this kind of ethical research would not have been possible?
S: Yes. It would only have generated a negative reaction, but the institutional mindset is changing.
K: Do you think there is a need for a formalized disclosure channel?
S: We have a current best practice where we notify the stakeholder and
then give them a time window usually 60-90 days.
C: Is it written down anywhere?
David: Who was the driving force?
S: The stakeholders. We also talked to CERT and told them we were
going to publish at Blackhat in 6 months. So we had 6 months to work
things out.
C: Maybe when we submit to conferences we should be asked to submit an ethical impact statement.
S: I agree.
D: This is not completely unprecedented. Last year a paper was
rejected on ethical grounds. Other papers have provided ethical
justification and been accepted.
In general the panel discussed the need to do ethical research. Len pointed out that it is now possible to do ethical research and that the stakeholders are willing to learn. There was also a discussion of the need or requirement for an ethical impact statement when disseminating research, for example in publications. The ethical impact statement could appear only in the online version if space in the printed paper is a constraint. Len also raised a problem with disclosing vulnerabilities before publishing: the vulnerabilities get fixed because they are disclosed, but they may then be found less interesting by the people who make the decision to publish. Another idea was to build an NDA into the disclosure of vulnerabilities, so the researcher can hold someone accountable if their research is leaked before they can publish it.
Paul: What if you do not know what to recommend?
A: Don't disclose the attack in the NY Times. Disclose it to professionals working in the industry, like AV companies.
Len: You can also give it to the ISPs, especially if it requires an
infrastructural change.
Sven: Or to CERT
David: I think we should require certification from people who want to
do this kind of research.
Lorrie: How do you propose to enforce it?
There was a back-and-forth discussion on this, but no common ground was reached other than that CERT seemed like the best bet.