Review of
Financial Cryptography,
the Workshop on Real-Life Cryptographic Protocols and Standardization,
and the
Workshop on Ethics in Computer Security Research,
St. Lucia
February 28-March 4, 2011
Review by Omar Choudary
3/13/2011
The Financial Cryptography and Data Security conference (FC'11) took place in Rodney Bay, St. Lucia, from February 28 to March 4, 2011. In attendance were about 100 participants from academia, industry, and government. The FC program chair was George Danezis, the general chair was Steven Murdoch, and the local arrangements chair was Fabian Monrose.
The main conference took place at the conference facilities of the Bay Gardens Beach Resort in Rodney Bay from February 28 to March 3, 2011. The three affiliated workshops were held on the final day, March 4, 2011.
On Saturday, March 5, 2011, a full-day excursion took a large group of participants on the 100-person-capacity catamaran Tango Too down the west coast of St. Lucia to Soufriere. There was ample opportunity to chat and mingle with other participants.
Presented here are two sets of notes, one from Omar Choudary for FC'11 and some of the three workshops, and another from Tyler Moore, for the Workshop on Ethics in Computer Security Research.
The talk was about the speaker's experience in building a bank. The main topics discussed were Tesco Mobile and mobile payments within Tesco Bank: there are 3000-4000 Tesco stores within the UK, 20% of the population walks through a Tesco in any given week, and the intention is to add banking capabilities to Tesco stores, which provides a large number of branches; the main task is to migrate RBS customers to Tesco's new infrastructure (all in less than 3 years). Jol is responsible for the security architecture and the deployment of security applications. The security architecture service presents non-trivial challenges: fitting into the organisation, and making people understand the security issues and how they affect the overall system. Jol then explained the strategy for getting a 90-95% discount on software solutions: approach the first two vendors and tell them you will choose bluntly, pushing to up to 50% below their asking price; then bring in a third vendor at a lower target price, and finally have all three compete at an even lower price.
Data from an online app where people enter personal data in order to get financial credit. Names were replaced by nicknames; only age and city were required. Analysis based on: a) subjective testing, with trained users categorising the amount and type of information entered by users; b) objective testing, with algorithms based on the length of the input data and an average over data entered by other users that could identify you. Result: users generally volunteer enough data to be identified.
It's About the Benjamins
61% of US computers are infected, while people say they want more security, so why the infections? What are the mitigation techniques? The experiment paid people to install an unknown executable, using Amazon's Mechanical Turk as the experimental platform (paying $0.01, then 5x, 10x, 50x and 100x that amount). The program would run for one hour and monitor whether the user quit it; after the hour the user gets a code and claims payment. For 50% of users the program asked for root privileges. Participation (viewed/downloaded/executed): 291/141/64 at $0.01, up to 1105/738/474 at $1. People with antivirus had more malware than people without AV. Feedback was gathered through Mechanical Turk. Even security-diligent people would agree to infect their machines once the price was right.
Session: Privacy and Voting (chair Nick Hopper)
By sharing a location we share part of our identity; using data mining we can infer identity from location. The paper asks how many location samples are needed to obtain identity (privacy erosion), and considers type vs quantity of location information. An EPFL + Nokia Research experiment (running since 2009) analyses location (Wi-Fi, GPS, GSM, etc.) and other data (e.g. Bluetooth) from each of 150 participants. The authors used state-of-the-art algorithms to infer identity from location (using points of interest such as home/work), based on clustering; see On the Anonymity of Home/Work Location Pairs, P. Golle and K. Partridge. They also infer points of interest. Results show the home/work anonymity set: how many users share their home/work location with others. For most location types, just a few location points give a good uPOI estimate; the uPOI gives a probability of being at home, work, the supermarket, etc. It seems to be a kind of scaling law: with just a little location data you can obtain a large amount of identity data.
Questions
Q: What is the clustering based on?
Answer: clustering at the office level, an area of around 50 square meters
(based on research vs legal aspects).
Coercion resistance is the main challenge analysed, although there are other issues. Two previous methods for protection: JCJ/Civitas vs AFT; JCJ allows revocation. Selections is about getting the best of both: linear tallying plus revocation. Authentication uses passwords. A panic password creates something that looks genuine but is bad; it is useful for giving false information so we don't reveal our real vote. Building blocks: a) exponential ElGamal: Enc(g^m), where Enc(m) is regular ElGamal; b) panic passwords: the basic solution (giving two passwords) doesn't work because the coercer can just ask for both; the 5P system requires passwords to be 5 words away from a dictionary. Protocol: a) registration (where soundness is the weak point); b) vote: create 5 values (B, c', pi_1, g^p, pi_2); c' and pi_1 are used for the roster/re-randomisation, g^p is the encoded password, and g^p with pi_2 is used as the voting proof. Security is argued via a (sketched) security game: the solution is coercion-resistant if the voter achieves the goal with certainty when deceiving and the adversary cannot distinguish between a genuine and a fake vote. Conclusion: over-the-shoulder coercion and vote selling can be mitigated, but the scheme still requires in-person interaction, and there is still the issue of the voter's untrusted computer.
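For intuition on the exponential ElGamal building block mentioned above, here is a toy sketch (parameters, group size and helper names are illustrative choices, not the paper's): encrypting g^m instead of m makes the scheme additively homomorphic, which is what linear tallying needs.

    # Toy exponential ElGamal over a small prime-order subgroup.
    # Illustrative only: a real voting system uses large, safe parameters.
    import random

    p = 2039          # prime; p - 1 = 2 * 1019
    q = 1019          # prime order of the subgroup of squares
    g = 4             # generator of the order-q subgroup

    sk = random.randrange(1, q)
    pk = pow(g, sk, p)

    def enc(m):
        """Encrypt g^m (exponential ElGamal): additively homomorphic in m."""
        r = random.randrange(1, q)
        return pow(g, r, p), (pow(g, m, p) * pow(pk, r, p)) % p

    def add(c1, c2):
        """Component-wise product of ciphertexts adds the plaintexts."""
        return (c1[0] * c2[0]) % p, (c1[1] * c2[1]) % p

    def dec_small(c, bound=100):
        """Decrypt, then recover m by brute-forcing the discrete log (small m only)."""
        a, b = c
        gm = (b * pow(a, q - sk, p)) % p   # b / a^sk
        for m in range(bound):
            if pow(g, m, p) == gm:
                return m

    # Two "yes" votes (1) and one "no" vote (0) tally to 2.
    tally = add(add(enc(1), enc(1)), enc(0))
    print(dec_small(tally))  # -> 2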
Questions
Anonymity networks: Tor and AN.ON. Tor uses the onion-routing mechanism; in AN.ON each new TCP channel is encrypted with mix keys. There are similarities between the two, but AN.ON lacks replay protection and the relay cell. However, AN.ON has better latency, so the authors want to add the two missing properties. Attacker model: the attacker monitors the link from the user to the first mix and wants to link user to website. Redirect attack: make the user visit the attacker's website and use browsing history and other data; this can be done by modifying the HTTP response (changing 200 to 302 and pointing to the attacker's website). 24% success rate. Replay attack: evaluated with and without load.
Questions
Case study: Absolute Manage, which could allow a remote administrator to see the person in front of the machine using the camera. Analysis of Absolute Manage: communication is over two channels, ports 3971 and 3970. Problems: broken encryption and missing authentication. Broken encryption: Blowfish in ECB mode with hard-coded keys; the keys are text phrases, revealed by running the strings command. Authentication: none; it is based only on a seed value (the serial number in UTF) encrypted using the known key. It is even possible to ask for the seed value by setting NeedSeedValue to true.
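A minimal demonstration of why Blowfish in ECB mode under a hard-coded key is weak (the key and data below are hypothetical, not the product's actual values): identical plaintext blocks encrypt to identical ciphertext blocks, and anyone who extracts the static key, e.g. with strings, can decrypt everything.

    # Requires pycryptodome. Hypothetical key: the product reportedly used
    # hard-coded text phrases recoverable with the `strings` utility.
    from Crypto.Cipher import Blowfish

    key = b"hard-coded phrase"          # hypothetical; Blowfish keys are 4-56 bytes
    cipher = Blowfish.new(key, Blowfish.MODE_ECB)

    pt = b"SECRET__" * 4                # four identical 8-byte blocks
    ct = cipher.encrypt(pt)

    blocks = [ct[i:i + 8] for i in range(0, len(ct), 8)]
    print(len(set(blocks)))             # -> 1: ECB leaks repeated structure

    # Since the key is static and extractable, decryption is trivial:
    print(Blowfish.new(key, Blowfish.MODE_ECB).decrypt(ct) == pt)  # -> True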
Questions
Many reseller chains sell exclusively digital media (Amazon, iTunes), and resellers form long chains. The customer wants to verify that the reseller purchased the item from an honest chain. Digital signatures and DRM do not really solve the problem here: a digital signature does not vouch for a given reseller, and DRM cannot really deal with an untrusted reseller (e.g. the Apple FairPlay system, where Apple is assumed to be trusted). Threat model: spoofing and counterfeiting; both parties would also like anonymity and unlinkability. The solution, the Tagged Transaction Protocol, relies on a Tag Generation Center (TGC). Building blocks: a) a signature scheme with provable security (e.g. RSA with the PSS padding scheme); b) zero-knowledge proofs. What's in the tag: Tag = {pk_x, L_x, pk_tag, a} signed under sk_TGC. The main steps of the protocol are: a) registration, done once per reseller; b) tag generation: reseller -> supplier -> TGC -> supplier -> reseller. It is possible to reissue a tag, and the TGC allows tag verification.
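A sketch of the tag as a TGC signature over the fields (pk_x, L_x, pk_tag, a), using RSA with PSS padding as the talk suggested; the field encoding, names and lengths here are illustrative assumptions, not the paper's exact format.

    # Requires the `cryptography` package. Tag layout is illustrative only.
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    tgc_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)

    def make_tag(pk_x: bytes, l_x: bytes, pk_tag: bytes, a: bytes) -> bytes:
        """TGC signs the concatenated, length-prefixed tag fields."""
        body = b"".join(len(f).to_bytes(2, "big") + f for f in (pk_x, l_x, pk_tag, a))
        sig = tgc_key.sign(body, pss, hashes.SHA256())
        return body + sig

    def verify_tag(tag: bytes, sig_len: int = 256) -> bool:
        """Anyone holding the TGC public key can check a tag's authenticity."""
        body, sig = tag[:-sig_len], tag[-sig_len:]
        try:
            tgc_key.public_key().verify(sig, body, pss, hashes.SHA256())
            return True
        except Exception:
            return False

    tag = make_tag(b"pk_reseller", b"item-listing", b"pk_tag", b"aux")
    print(verify_tag(tag))  # -> True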
Questions
About loyalty cards: they are used for data mining, etc. Sometimes they don't even give discounts, but raise the price artificially and then lower it to the actual value once you present your personal card. How can we build an infrastructure through which consumers can make linkage across multiple events difficult? The solution, ShopAnon, is based on randomising the club card you are using. It is different from e-cash, and does not deal with the points-allocator system. Assumptions: a) users can use their mobile phone (e.g. to store multiple barcodes); b) retailers can ban cards if they wish. Architecture: a) you open the phone and the appropriate barcode pops up; b) the choice is based on your location and a server; c) oblivious transfer is used to make it hard for the server to know which tag was sent to you. Tests show that the barcode on the phone worked 95-100 percent of the time.
Questions
CAs give certificates to anyone for money. Browsers accept a large
number of "trusted" certificates, including government CAs. Governments
are unlikely to use their own CAs to sign malicious certificates. TLS
assumes that CAs carefully verify identities before signing. The Packet
Forensics tool is used on network topology to mount man-in-the-middle
interception. Christopher defined a new attack, the Compelled
Certificate Creation Attack, where a government compels a company to
issue a new signing CA. A proposed solution is based on trusting CA
changes within a country, but there is a problem if a site's
certificate moves, say, from a US-based to a Russia-based CA. The
CA/Browser Forum (including PayPal, MS, Google, Mozilla, etc.) is a
cabal and must be replaced, because its meetings are held in private
and by invitation only, yet it decides everything that goes into the
browser.
Questions
Ian Goldberg: I don't think there is any cabal there. Are we going to
see six more weeks of this SSL stuff, or will the forum just give up?
A: Probably the forum will not give up, as the revenue stream is too high.
Serge Egelman: What hope do you have for better addressing the
CA/trust issue?
Answer: I am not advocating for my system to be deployed, but for getting
other people involved and trying to bring about a better system.
The paper focuses on public keys. Security model: the adversary can eavesdrop, modify or drop messages; an oracle is also available. Two types of attacks on randomness were presented. Reset-1: the randomness is completely controlled by the adversary. Reset-2: the adversary doesn't know the randomness but is able to make the same value appear in different sessions (see the reset attacks against virtual machines, NDSS 2010). Protocols investigated: a) SIG-DH ISO, SIGMA/IKE, JFK: if used with a digital signature, the reset attack can recover the key from two signatures; b) HMQV: implicit authentication but vulnerable to the reset attack, see the paper by Menezes and Ustaoglu in the Journal of Applied Cryptography, vol. 2, no. 2, 2010. The Reset-1 attack model: a) randomness chosen by the adversary; b) does not consider forward secrecy. Reset-2 considers forward secrecy; without this, Reset-1 is the stronger attack. The goal is to design and prove the security of a protocol based on HMQV. Construction: a) a transformation from Reset-2 to Reset-1 security by using a secure pseudo-random function that makes the randomness different across sessions; b) use a digital signature scheme with forward secrecy and apply the PRF transformation.
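A sketch of the PRF-transformation idea described above: instead of using the (possibly repeated) raw coins directly, derive per-session randomness with a PRF keyed by a long-term secret, so that coins repeated by a VM reset still yield different derived randomness whenever the session inputs differ. The key handling and names below are illustrative assumptions, with HMAC standing in for the PRF.

    # HMAC-SHA256 as the pseudo-random function; illustrative only.
    import hmac, hashlib, os

    long_term_prf_key = os.urandom(32)   # part of the party's long-term secret

    def session_randomness(raw_coins: bytes, session_transcript: bytes) -> bytes:
        """Derive the coins actually used in the protocol from the raw coins
        and session-specific data, so a VM reset that repeats raw_coins does
        not repeat the derived randomness across different sessions."""
        return hmac.new(long_term_prf_key,
                        raw_coins + session_transcript,
                        hashlib.sha256).digest()

    # Same (reset) raw coins, different sessions -> different derived coins.
    coins = b"\x00" * 32
    print(session_randomness(coins, b"session-1: peer=A, nonce=123") !=
          session_randomness(coins, b"session-2: peer=B, nonce=456"))  # -> True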
Questions
Goal: enable anyone to verify that digital photographs are real (i.e., to spot modifications). Perhaps some time in the future camera manufacturers can include a crypto module to sign images; a photo editor would then have limited ability to update the signature, only for a set of allowed modifications. Canon already has some history of using MAC-enabled USB sticks. Challenge: supporting post-capture edits. The idea is based on Merkle hash trees, but these don't scale well: for a 512x512 image we need about 12,000 witnesses for a proof. The better, novel idea is a 2-dimensional Merkle hash using partially overlapping hashing; this way, for a 512x512 image, only about 1,500 witnesses are needed. Croppable signatures are used for verification, and other image operations are supported. Scaling is handled via the DCT: scaling the image I corresponds to cropping its transform F, so by signing the DCT representation we get scalable signatures, and scaling and cropping can be combined to support signatures over both operations. The paper contains the proofs.
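For intuition on the witness counts quoted above, here is a plain, one-dimensional Merkle hash tree over image blocks; the paper's 2-D overlapping construction is more involved, and the block size and helper names here are illustrative assumptions. The sketch only shows the basic tree and the size of one authentication path.

    # Minimal Merkle tree over fixed-size leaves; illustrative only.
    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def merkle_levels(leaves):
        """Return all levels of the tree, from leaf hashes up to the root."""
        level = [h(x) for x in leaves]
        levels = [level]
        while len(level) > 1:
            if len(level) % 2:                 # duplicate last node if odd
                level = level + [level[-1]]
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
            levels.append(level)
        return levels

    def auth_path(levels, index):
        """Sibling hashes (the 'witnesses') needed to verify one leaf."""
        path = []
        for level in levels[:-1]:
            sibling = index ^ 1
            if sibling >= len(level):
                sibling = index
            path.append(level[sibling])
            index //= 2
        return path

    # 4096 leaves (e.g. 8x8 blocks of a 512x512 image): one path has only 12
    # hashes, but proving a whole cropped region needs many such paths, hence
    # the large witness counts mentioned in the talk.
    leaves = [bytes([i % 256]) * 8 for i in range(4096)]
    levels = merkle_levels(leaves)
    print(len(levels[-1]), len(auth_path(levels, 1234)))  # -> 1 12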
Questions
hPIN/hTAN: A lightweight and low-cost e-banking solution against untrusted computers
Motivation: untrusted computers are a big problem for e-banking, and existing solutions suffer from the security-usability dilemma. The authors' solution, hPIN/hTAN, has a simple design and an open framework. Two parts: hPIN for login and hTAN for transactions. Three h's: hardware + hashing + human. Three no's: no keypad, no out-of-band channel, no encryption. Proof of concept plus a user study show better security-usability. Assumption: the attacker has full control of the user's computer. hPIN for login: the text typed on the keyboard is encoded, and you have a translation keypad on the USB token. E.g. to input 4318 you type ajwe on the computer; the banking system then knows how to decode ajwe back to 4318. Security aspects: PIN confidentiality, user/server authentication. The usability test shows a success rate of 91%, a median login time of 27.5 s, and 70 s to complete a transaction, using an ATmega32 @ 16 MHz. Changes are required to both client and server.
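A toy version of the login mapping described above (the mapping alphabet and key handling are assumptions for illustration): the trusted token shows a fresh digit-to-letter table, the user types the mapped letters on the untrusted PC, and only a party that knows the table can map them back to the PIN, so a keylogger learns nothing useful.

    # Illustrative one-time digit->letter translation table, as in "4318 -> ajwe".
    import secrets
    import string

    def fresh_table():
        """Token picks 10 distinct random letters, one per digit 0-9."""
        letters = secrets.SystemRandom().sample(string.ascii_lowercase, 10)
        return {str(d): letters[d] for d in range(10)}

    table = fresh_table()                 # displayed only on the trusted token

    def encode(pin: str) -> str:          # what the user types on the untrusted PC
        return "".join(table[d] for d in pin)

    def decode(txt: str) -> str:          # what the token/bank recovers
        return "".join(d for c in txt for d, l in table.items() if l == c)

    typed = encode("4318")
    print(typed, decode(typed))           # e.g. "ajwe 4318"; a keylogger sees only `typed`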
Questions
Ross Anderson: This is an interesting time in the dynamics of payment systems. There are two sides to the problem: a) you have to appeal to customers; b) big changes happen rarely. The US has a fight between retailers and the card networks: Walmart vs VISA/MasterCard. There is an issue with contactless payments, where no PIN is used. How do we move towards a secure element in NFC technology? It would be useful in many situations and markets, such as those in Africa. There are no vendor plans to create secure applets that are inserted into the secure element of the phone; they just have an Oyster-card-like element tapped on the back. What we want: an easy and secure way to do transactions. Hardware needs to be able to cope with a new system, and maybe remove untraceability. Such payment schemes are going to push VISA/MasterCard out of the markets; they are very aware of and afraid of this, so they will try to limit market changes and innovation. All of this will involve some kind of legal discussion and new laws. What do we do to make federated authentication work? Economic vision: how do you do it well when your phone has 5 card tokens inside? What happens when your phone is taken or stolen? How many banks/institutions do you need to call? Who is going to take on the responsibility? How do we change the business model such that banks and other institutions compete to be your friend?
Ahmad-Reza Sadeghi: Based on discussions with German banks, security is not the issue; the business model is. Social networks will be the basis for financial systems across different communities, exchanging information, credit and other financial matters. Banks are observing this market and will probably create new communities to target the social factors. Injecting fake information in order to profit from the new system will create a kind of anti-social network, where people will be afraid of all the social ads. We could see freedom-fighter vs terrorist patterns as users start to use social network technologies.
Steven M. Bellovin: Not sure if banks are stupid, greedy or ignorant. Banks design by the principle of "follow the money". Their security analysis: fraud costs that much, security costs almost the same, so they really don't care. Banks don't seem to understand today's or yesterday's cyber attacks; they seem to understand only physical attacks, and are very good at designing resistant vaults. The economic incentives of existing systems are wrong.
Lenore D. Zuck: Focus on the financial infrastructure in the US. There is very limited research and academic knowledge of the financial sector's infrastructure. Progress is only possible if we can come up with a good funding proposal; the effort must come from the academic community, to make the funding organizations move into pushing money. Several areas have received a lot of funding, e.g. botnets: the FBI, NSF and others have been channeling money into them. NSF organised a workshop on the security of the financial infrastructure (late 2009); the result is that the financial sector and the academic community speak different languages, and we need to get to a common understanding. US payment systems settle $4.3 trillion a day; VISA's total is $5.4 trillion a year. FBIIC and other organisations ran a two-day workshop; the financial community sees itself as facing no threat, with only minor concerns about vanilla things: software correctness and insider threats. It is difficult for them to openly admit that they have problems. Yesterday there was an attack on Morgan Stanley by Chinese hackers. Generally the systems are not secure, but they are hard to analyse as they are closed systems. NIST and DSA are starting a funding program for financial security infrastructure, giving $2-5 million, so basically nothing. Outcome: we need to put more pressure to get more money.
Questions
Motivation: censorship. Flavours of censorship: a) block non-permitted content (the focus of the talk); b) allow only permitted content. Proxies offer the same resource under a different name/location, but are circumvented by content-based filtering; counter with encryption, obfuscation, etc. Problem: China blocks Tor repeatedly by enumerating Tor routers and shutting them down. Current stats: 600 direct users, 2000 via bridges. The authors focus on the proxy dissemination problem: how to provide a proxy service without having the proxy blocked? If proxies are advertised too broadly they are quickly censored; advertise too narrowly and we get just a few users. The authors solve this by recasting it as an optimisation problem: maximise the number of user-hours of usage. Mechanism: let users decide how to disseminate addresses. Solution details: a) measure the service; b) evaluate registered-user performance; c) compute the yield: how many users a registered user has attracted, where multiple registered users may be advertising the same proxy. System overview: a) an administrator controls the database; b) registered users get proxy addresses; c) they then advertise the proxy addresses and attract users; d) attracted users use DNS to look up the proxy/website; e) feedback loop: the DNS server and the proxy report usage statistics. Using these measurements, the system weighs the number of new users against the risk to the proxy; if the effect is good the system allows the user to keep advertising, otherwise it skips the user.
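A rough sketch of the kind of yield computation described above. The paper's exact formula is not given in these notes, so the credit-splitting rule here (divide a proxy's user-hours evenly among the registered users advertising it) is only an assumption for illustration.

    # Hypothetical yield computation: credit user-hours on each proxy to the
    # registered users who advertised it, splitting evenly when several did.
    from collections import defaultdict

    def compute_yield(usage_hours, advertisers):
        """usage_hours: proxy -> attracted user-hours (from DNS/proxy reports).
        advertisers: proxy -> list of registered users advertising that proxy."""
        score = defaultdict(float)
        for proxy, hours in usage_hours.items():
            ads = advertisers.get(proxy, [])
            for user in ads:
                score[user] += hours / len(ads)
        return dict(score)

    usage = {"proxy-A": 120.0, "proxy-B": 30.0}
    ads = {"proxy-A": ["alice", "bob"], "proxy-B": ["alice"]}
    print(compute_yield(usage, ads))  # {'alice': 90.0, 'bob': 60.0}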
Questions.
John McHugh: Any sensitivity analysis on how registration should be done?
Answer: There are several ways, including a private link, a blog, etc.; it's
up to the users.
John McHugh: How do you make sure the disseminated proxies do not all get detected?
Answer: The assumption is that users are motivated to increase security.
Tyler Moore: How does this defend against Chinese users participating
in the system (an active adversary)?
Answer: In the ideal case the sets of registered users would be
disjoint. But indeed there is that problem; this is just an idea.
Steven Murdoch: The advantage of Tor is that it gets users from many
groups. But in this system you start to see certain nodes created by
particular groups of people, and that can be bad: you only have to identify
one user and then you can pick up all the users connecting to that bridge.
Answer: Yes, that is a weakness.
Roger Dingledine: There is a concern about the fact that you count
user-hours. As an adversary I can pump up user-hours artificially. How do
you fight against this?
Answer: It comes down to how to define what a useful service is. We
still don't have a good answer to that.
BNymble
(presented by Nick Hopper)
The problem with collaborative websites (e.g. Wikipedia) is people posting unproductive (abusive) content on them. Countermeasures include blocking the IP addresses of such users. This is a problem in countries where access is limited by firewalls, e.g. China; the solution is to use Tor or other proxy methods to circumvent the restriction. Dilemma: what to do if an abusive user is using proxies? Currently, blocking all proxies causes many issues, as Chinese users then cannot contribute. Idea: use a kind of mask on top of Tor to prove to Wikipedia that so far the user hasn't done anything wrong, i.e. provide an anonymous system for banning users based on bad behaviour. We need unlinkability plus blacklistability, and it must be very cheap. There is an existing solution, Nymble, which solves the problem by introducing a pair of trusted parties: a pseudonym manager (PM) and a Nymble manager (NM). The user gets an unlinkable id (Nym), sends the Nym to the NM and gets tokens (n1, ..., nw). The user then talks to Wikipedia using the tokens; if the user misbehaves, the tokens are sent to the NM, which extracts the future tokens n1...nw and sends them to Wikipedia to block the user. Steps: a) registration; b) Nymble issuing, based on a hash function over a time sequence. There is a problem if the NM talks to the PM, as the user-action pairs are then revealed. Newer solutions: a) BLAC: very private, not scalable; b) Nymbler/Jack: variants of Nymble with better privacy (no NM/PM problem); c) the authors' new contribution: BNymble, just like Nymble but more scalable. The Nym is replaced by an asymmetric token to provide a signature.
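For intuition on step (b) above, the original Nymble tokens are derived by iterating a hash function over the time periods of a linkability window, so that revealing one period's seed lets the server recompute only later tokens, never earlier ones. A minimal sketch follows; seed handling is simplified and the labels are illustrative assumptions.

    # Toy Nymble-style token chain: per-period seeds evolve by hashing, and each
    # token is a hash of its seed, so exposing a seed links only future periods.
    import hashlib, os

    def H(tag: bytes, data: bytes) -> bytes:
        return hashlib.sha256(tag + data).digest()

    def tokens_for_window(initial_seed: bytes, periods: int):
        """Return (seeds, tokens) for one linkability window of `periods` steps."""
        seeds, tokens = [], []
        s = initial_seed
        for _ in range(periods):
            seeds.append(s)
            tokens.append(H(b"token", s))   # what the user presents to the website
            s = H(b"seed", s)               # next period's seed
        return seeds, tokens

    seeds, tokens = tokens_for_window(os.urandom(32), periods=5)

    # If the user misbehaves in period 2, the manager reveals seeds[2]; the
    # server can then recompute the tokens for periods 2..4 (blacklisting),
    # but periods 0..1 stay unlinkable because hashing cannot run backwards.
    recomputed = tokens_for_window(seeds[2], periods=3)[1]
    print(recomputed == tokens[2:])  # -> True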
Questions.
John McHugh: You must record the token that was used in every action,
so do you have forward linkability?
Answer: In the original Nymble the system actually takes care that you
only get a good new id.
Towards Secure Bioinformatic Services
(presented by Stefan Katzenbeisser)
Issue: how to securely process genome data? The problem is that genome data is very sensitive and can be used for many things in the future. Idea: process the genome data privately using a two-party computation protocol. The physician does not need to reveal the data to the bioinformatics institute; at the end of the processing he gets the desired result via some kind of query. There is a lot of previous work, some describing how to encrypt genome data and recognise patterns to analyse it, but diseases like cancer cannot be reliably detected using string matching and pattern analysis. Better tools include Hidden Markov Models (HMMs), where all transitions are probabilistic. There are proposed solutions using a secure computation of a probabilistic HMM query, but there are problems with the length of the integer values: they are too short and lose data. The authors' contribution: secure computation with non-integer values, where values are approximated using a logarithmic representation. This provides secure and efficient arithmetic operations with constant error. Implementation: code the floating-point value as an integer triple (rho, sigma, tau): rho indicates whether the value is 0, sigma is the sign, and tau is the logarithmic representation. Multiplication is simple, but addition is very complicated, so the authors chose to create a table with all the possible values of the computation; the expectation is that only a few thousand entries are needed if the base is chosen correctly. With this trick the cost of addition becomes much more acceptable: 90-140 secure additions per second on a 2.1 GHz computer. Experiments show the method works, with high accuracy obtained despite heavy quantization.
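A plaintext sketch of the logarithmic value representation described above: the (rho, sigma, tau) triple, easy multiplication, and table-assisted addition. The base and helper names are my choices for illustration, and of course the real protocol performs these operations on encrypted/shared values inside the two-party computation, not in the clear.

    # Plaintext model of the (rho, sigma, tau) encoding: rho = is-zero flag,
    # sigma = sign, tau = quantized log_B |x|. Illustrative only.
    import math

    B = 1.05          # logarithm base: a small base gives finer quantization

    def encode(x):
        if x == 0:
            return (1, 0, 0)
        return (0, 1 if x > 0 else -1, round(math.log(abs(x), B)))

    def decode(t):
        rho, sigma, tau = t
        return 0.0 if rho else sigma * B ** tau

    def mul(a, b):
        """Multiplication is easy: add the logs, multiply the signs."""
        if a[0] or b[0]:
            return (1, 0, 0)
        return (0, a[1] * b[1], a[2] + b[2])

    def add(a, b):
        """Addition needs the correction term log_B(1 +/- B^d), which the
        protocol tabulates; decoding and re-encoding stands in for that here."""
        return encode(decode(a) + decode(b))

    a, b = encode(3.7), encode(2.5)
    print(decode(mul(a, b)), decode(add(a, b)))
    # close to 3.7*2.5 = 9.25 and 3.7+2.5 = 6.2, up to quantization error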
Questions.
Moti Yung: There are two papers in the past on this, e.g. O2 - computing on rationals.
Answer: Yes, they have been checked, but this is a different method.
Kirill Levchenko: why do you take the model where the physical
institute is trusted and the other not?
Answer: it's a matter of regulations and policy. We used this to
analyse very personal data that you don't want to give to your
physician.
Moti Yung has worked at IBM, RSA and Google, and is visiting at Columbia. There are three types of crypto: a) abstract: models, quantifying the adversarial model, definition of the implementation and proof of security; b) applied: looks at the systems context and applies a model to that system; c) actual: actually applying the cryptographic primitives to the models. Symmetric public key encryption (Crypto '85, Moti with Galil and Haber, as a student): a much more scalable method, with the public key only at the server; the principle applies to SSL/TLS. At IBM he worked on the DSA system, open systems, and single sign-on products. "The Systematic Design of Two-Party Authentication Protocols" was motivating work for the protocol analysis (provable security) of Bellare and Rogaway. On the Internet stack (results from 94-97): "Scalability and Flexibility in Authentication Services" (Infocom) gave the first crypto protocols with a stateless per-connection server, using authenticated encryption before it was formally defined; this is extensively used in encrypted cookies. How to do crypto within engineering? The traditional order is: business needs, feasibility, functionality, realization, usability and performance, reliability, security, privacy (if they ever get there). Crypto has to be combined with the others: system functions, performance, usability; crypto is best applied when these conflicts do not arise and it is hidden from the end user. The goal is to make the crypto somehow help performance; the need for crypto might come from intrinsic sources. Important: get involved early, don't impose solutions, take time to analyse the system: your time will come. Design the crypto optimised for the task, but such that it can be used for possible future tasks. He then moved to consulting: the Greek national lottery project. He designed, implemented and inspected the working back-end; it has been working for several years, and first-year revenues went up 35 per cent. It then got used by Goldman Sachs. The problem tackled: they needed a very secure system (security + crypto) and wanted a lottery every 15 minutes; you cannot just use the Unix random generator, as the Swedish did. Moti came to scrutinize the system, and made the design such that even the designers and internal people cannot bypass it. The critical system is described in "Electronic National Lottery" (Financial Cryptography 2004); it was robust because the crypto was done right. Special requirements: cryptographic robustness and protection against various attacks (e.g. if the main randomness source is degraded, the system uses another source as well); concurrency analysis; several timing constraints (not too early, not too late); availability. In the design he used whatever was needed from crypto, but not more! A recent project is AdX (at Google), used for advertisement exchange. Ads as a payment instrument: ads are the dual of micro-payments. The AdX system is responsible for displaying ads on websites that want to display them for money. Other existing systems use a static policy, specifying in advance which ads to display. The new system, AdX, is a real-time, dynamic system in which companies compete for which advertisements to put on the website. AdX model: a) a viewer goes to a publisher; b) the publisher sends a price to the ad exchange (AdX); c) via the ad network, all advertisers give a price they are willing to pay; d) an auction determines who wins; e) the publisher gets the ad and the advertiser pays based on the second-price model. There are many constraints in place: if the process takes too long you cannot charge for the ad. It is the biggest exchange in the world transaction-wise, with a billion auctions per day.
Moti joined the project to work on the security of bids; there was a gap between the service agreement and the engineering, and crypto was the solution. The important issue was the price embedded in the iFrame given to the user. One cannot create multiple data flows because of time, resources, etc., and cannot use standard solutions (SSL) because of the time constraints, yet must provide secrecy and integrity. The solution uses authenticated encryption: pseudo-random-function-based encryption, a one-time pad for each auction, and a MAC over the ciphertext. Advantages: fast, semantic security, flexible utility. There is also a privacy and data liberation issue: at the end of the day/month Google releases data to agencies. Solution: use crypto to refer to the user cookie within AdX, preventing agencies from extracting information about users. Michael Rabin, on sabbatical at Google, then designed with Moti an "Auction Verification System" for dispute resolution in AdX. Conclusions: a) the attacker is less clearly defined; b) sometimes you have to envision attacks that are not even there yet; c) crypto is not used to achieve ultimate security but rather to shift exposure; d) many times we assume things that are really broken, e.g. the public key infrastructure; e) exact costs need to be taken into account, as well as all the communication patterns; f) being a good cryptographer helps; g) it is important to collaborate; h) crypto keeps the field very much alive.
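A sketch of the per-auction encryption pattern described above: a PRF-derived one-time pad plus a MAC over the ciphertext. This is a generic reconstruction of the idea from the notes, not Google's actual AdX format; the key sizes, labels and field layout are assumptions.

    # Pad-and-MAC over a short price field, keyed per auction by a PRF (HMAC).
    import hmac, hashlib

    ENC_KEY = b"E" * 32    # long-term keys shared between exchange and publisher
    MAC_KEY = b"M" * 32    # (placeholders; real keys would be random secrets)

    def prf(key: bytes, label: bytes, auction_id: bytes) -> bytes:
        return hmac.new(key, label + auction_id, hashlib.sha256).digest()

    def seal_price(price_cents: int, auction_id: bytes) -> bytes:
        pt = price_cents.to_bytes(8, "big")
        pad = prf(ENC_KEY, b"pad", auction_id)[:8]        # one-time pad per auction
        ct = bytes(a ^ b for a, b in zip(pt, pad))
        tag = prf(MAC_KEY, b"mac" + ct, auction_id)[:16]  # MAC over the ciphertext
        return ct + tag

    def open_price(blob: bytes, auction_id: bytes) -> int:
        ct, tag = blob[:8], blob[8:]
        if not hmac.compare_digest(tag, prf(MAC_KEY, b"mac" + ct, auction_id)[:16]):
            raise ValueError("bad MAC")
        pad = prf(ENC_KEY, b"pad", auction_id)[:8]
        return int.from_bytes(bytes(a ^ b for a, b in zip(ct, pad)), "big")

    blob = seal_price(price_cents=137, auction_id=b"auction-42")
    print(open_price(blob, b"auction-42"))  # -> 137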
Session: Crypto (chair Ahmad-Reza Sadeghi)
In the SSL/TLS handshake (using RSA) two round trips must be completed before application data starts flowing. Problem: if a server key is ever compromised, all the session keys made with that key are compromised. We want forward secrecy, so we use the Diffie-Hellman key exchange. During the hello we choose which cipher we want to use; server and client exchange DH keys authenticated using RSA. Server performance varies: an RSA handshake costs 1 private RSA operation, while an elliptic-curve (EC) DH handshake costs 1 private RSA operation (sign) + 2 EC DH group operations, so we need to improve the performance. A set of named curves is recommended for TLS in RFC 4492; the problem is that we need to support all of them, so we try to make one of them faster. The OpenSSL elliptic curve library supports them all but is slow-ish; some fast implementations are available (bench.cr.yp.to/ebats.html). Emilia picked NIST P-224 (http://www.secg.org) as the curve to be made faster; it is implemented in OpenSSL and NSS, and a 224-bit field gives 112-bit security and fits nicely into 64-bit registers. The NIST P-224 implementation targets 64-bit processors: redesigned field arithmetic, the bignum library thrown away, an unsigned integer representation, branch-free modular reduction, and constant-time operation resistant to software side-channel attacks. Branches are eliminated using a select function based on a bitmask to choose between two possible results, which avoids different timing on the branches; there was also an issue with multiplication, which is not constant-time, so constant-time table lookups were added. select() is used repeatedly to avoid timing attacks. The result improves all ECDH operations and ECDSA verification (64-bit optimisation); RSA-1024 + ECDH-224 is twice as fast as the older implementation. The implementation will be released in OpenSSL 1.0.1 (ftp://ftp.openssl.org/snapshot/).
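For intuition on the branch-free select mentioned above: compute both candidate results and combine them with a bitmask derived from the condition, so neither the running time nor the memory accesses depend on the secret bit. A Python sketch of the pattern follows; the real implementation does this on 64-bit limbs in C, and Python itself gives no timing guarantees, so this only illustrates the arithmetic.

    # Branch-free selection of one of two 64-bit values via a bitmask.
    MASK64 = (1 << 64) - 1

    def ct_select(bit: int, a: int, b: int) -> int:
        """Return a if bit == 1 else b, without branching on `bit`.
        mask is all-ones when bit == 1 and all-zeros when bit == 0."""
        mask = (-bit) & MASK64
        return (a & mask) | (b & ~mask & MASK64)

    print(hex(ct_select(1, 0xAAAA, 0xBBBB)))  # -> 0xaaaa
    print(hex(ct_select(0, 0xAAAA, 0xBBBB)))  # -> 0xbbbb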
Security considerations include: a) session caching: SSL allows session caching on the server side, and the client resumes a session by sending the session ID; this can be bad, as an attacker could retrieve the session key from the cache, but the cache can be disabled. b) Session tickets: TLS adds a session ticket mechanism for stateless session resumption, where the client stores the session information; but this now depends on the server's long-term key, which conflicts with forward secrecy: a server key compromise allows the attacker to decrypt the ticket and obtain the session key. A solution might be to disable session tickets. c) False start: an optimistic client that believes it has all the information from the server starts sending application data before the finished message. This improves performance, but an attacker could impersonate the server and force a weak cipher; the client should refuse false start with a weak cipher. Firefox/Chrome are implementing this. d) Snap start: predict all the server messages and certificates and start sending application data from the beginning; zero delay, but this gives no forward secrecy at all.
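To make the trade-offs above concrete, here is how a server written in Python could insist on forward-secret (ECDHE) cipher suites and turn off session tickets using the standard ssl module; this is a generic illustration, not the configuration discussed in the talk, and the certificate paths are hypothetical.

    # Server-side TLS context preferring forward secrecy and avoiding tickets.
    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # Hypothetical paths; supply your own certificate and key.
    ctx.load_cert_chain(certfile="server.crt", keyfile="server.key")

    # Only ECDHE suites: a later server-key compromise does not expose old sessions.
    ctx.set_ciphers("ECDHE+AESGCM")

    # Session tickets wrap session state under a long-term server key, which
    # weakens forward secrecy as discussed above, so disable them.
    ctx.options |= ssl.OP_NO_TICKET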
Towards real-life implementation of signature schemes from the strong RSA assumption
The EMV signature scheme is based on ISO/IEC 9796-2 (RSA signatures).
SDA-IPKD (static data authentication - issuer public key data) is used
on most cards. The CNTW attack is a forgery method proposed by
Coron, Naccache, Tibouchi and Weinmann at Crypto 2009. The attack can
forge an ISO/IEC 9796-2 signature at practical cost: $800 for a
2048-bit RSA modulus using Amazon EC2; however, that attack cost was
not evaluated under actual EMV conditions. The contribution of the
authors is to re-evaluate the cost of the CNTW attack in detail
(Coron et al. only roughly estimated the forging cost of an EMV
signature): they estimate the cost of the attack under all the cases
in which each of the D1-D7 data fields of the signature is alterable
or fixed. Overview of the CNTW attack: a) represent a signed message
as a multiplication of multiple messages; b) the forged signature is
obtained from correct signatures over combinations of these multiple
messages, where some of them might not be genuine. The idea is to
represent v = a*u - b*N, where u is the signature, and then find
parameters a and b that accelerate the computation. Conclusion: an
EMV signature can be forged for less than $2000.
Questions:
Omar Choudary: Does this help to break bank transactions?
Answer: Not sure (contact the main author).
About Christopher: a privacy researcher (Washington DC), advocate and PhD candidate, with a focus on ISP/telco-assisted government surveillance. He was the first ever in-house technologist at the US Federal Trade Commission. In Tor, the communication from Alice to Bob goes via multiple servers/relays such that no single organization can control all of them. Nasty people might be running Tor servers. There are active vs passive rogue servers; it is possible to detect active attacks (MITM servers) and Tor can block these, but there is nothing we can do about passive data collection. Governments will not disclose passive listening, hackers cannot really be stopped, but what about researchers who spy on Tor? There are bad examples of research studies. McCoy et al. (PETS '08), "Shining Light in Dark Places", geolocated users by running a Tor exit node. The researchers did not seek or obtain prior legal analysis of their network, only asking a few minutes of a law professor's time, and the community was not very happy; their university IRB (Institutional Review Board) said that no rules were violated. Castelluccia et al. ('10) made a study on private information disclosure from web searches using stolen session cookies (captured over a Wi-Fi network); they got data from 500-600 people per day from their network while sniffing. From 10 users they got opt-in consent to actively hack their accounts. They observed 1803 distinct Google users, 46% of whom were logged into their accounts. The privacy of their colleagues was given much more consideration than that of Tor users, so it is unclear why they thought it was OK to sniff people using Tor. Conclusions: a) the first study was specific to Tor, the second used Tor just as a shortcut to more data; b) there have been several other studies since (even ones awarded best paper); c) there is a problem with the privacy of Tor users, and something should be done; d) there is also a problem in that this kind of violation is not well documented; McCoy et al. do not provide information on their web page about the negative community response. Should we discourage Tor snooping research? In Christopher's view, definitely yes. Should it be illegal if the FBI does it? Google has engaged in a massive campaign of car-driving and Wi-Fi payload packet sniffing and claims this is not illegal, so of course the FBI thinks it is clearly OK. We should establish standards for ethical work, and minimize user data collection and retention. Research should be legal in the country where it is performed. How to enforce the standard: reject academic papers that do not respect ethical considerations, at least at the top conferences; e.g. SOUPS now requires this kind of enforcement.
Questions.
Question: tcpdump is not actually illegal in all research organisations.
Answer: Actually in the US it is illegal, with minor exceptions related
to the health of the network.
Ross Anderson: Not sure this is entirely a positive thing. In medical
research, health records are made available for research, although there
are many counter-arguments on privacy grounds; researchers make
contributions based on that data, so there are disputes.
Answer: I think the situation we have now is even worse than in the
Tor example.
George Danezis: I think the discussion is irrelevant, as it is based on
the FBI and specific regulations, not a general ethical point of view.
Answer: We think there is a general ethical problem with researchers
sniffing for random reasons, regardless of local regulation.
John McHugh: There is a problem if the Tor exit node is on a
weakly protected Wi-Fi link.
George: We published a paper here that shows how to break into
facilities (RFID related). If we try to make such publications hard,
this works against the principle of security disclosure in the service
of improving security.
Answer: If we don't establish research community standards, then
people in DC will, and they will do it badly.
Tyler Moore: an easy way to enforce the standard might be to publish a
set of guidelines on the Tor website.
Answer: pretty good idea. Maybe a few bullet-points to start using
your server.
Ethical Dilemmas in Take-down Research
(presented by Tyler Moore)
Tyler has made observational studies of phishing attacks, trying to understand how banks respond to phishing, in particular how websites are taken down, in several papers from 2007 onwards with R. Clayton. Dilemmas: 1) Should researchers notify affected parties in order to expedite take-down? They started out just observing, then asked whom to tell about it, and finally decided to leave things as they were since there is no clear path for notification; is there any general research conclusion from this? 2) Should researchers intervene to assist victims? Tyler and Richard reported 414 compromised users whose details were recovered from phishing websites. They stumbled into a situation where you feel obligated to act; therefore (because of the financial/time cost) you try to avoid situations where you get personal data. See Rod Rasmussen, Internet Identity; it is a big step from reporting credentials to giving them to the admin. 3) Should researchers fabricate plausible content to conduct pure experiments on take-down? There have been several attempts, using fake copyright material and then asking ISPs to act; the problem is that you consume resources meant for real attacks in research experiments. 4) Should researchers collect world-readable data from private locations? Some websites publish private data in a publicly readable location; they decided to actively look for this kind of data in order to see how long it takes for authorities to take down such websites. In 2007 they got a nasty letter from one of the websites they made public, so from then on they still collect the data but anonymise it. But what if our analysis assists criminals, e.g. by revealing "what to avoid" in published papers? They did not reach any conclusions after much analysis of this problem; the goal is to help defenders by describing how attackers work. 5) Should investigatory techniques be revealed? If we reveal our methods, the attackers can adapt their mechanisms. An example is the case of Mr-Brain, who was stealing phish from fraudsters; the authors knew but didn't disclose it, and then another research group made the disclosure. The problem is that it is not very scientific to keep your methods hidden, so the authors generally disclose most, but not all, of their methods. 6) When should datasets be made public or kept secret? See PhishTank, where all phishing attempts get reported. This is a classic battle: banks don't like this as it provides a history of which banks have been compromised; so we need to think about the extent to which we disclose data, and we need a metric and a system to determine which party the data helps more. Tyler and Richard ran a test on PhishTank to see if attackers can re-compromise websites using PhishTank information. Result: websites that appeared on PhishTank were re-compromised less than others. Conclusions: a) disclosing more data leads to better security; b) sharing data between take-down companies would reduce phishing website lifetimes.
Questions
Motivation: continuous sharing of real user data is necessary and useful for research, but the data is held by operators (ISPs, etc.), not researchers. Why should they share these data? What are the incentives? Problems: avoidance of error, negligence or violations of ethical considerations. There are existing industry-academia data sharing models: a) interns get shared data, but this is very limited in terms of the work they can do and the data they can access, and limited by the financial factor; b) sponsored research, which allows good data but may unfairly produce a good publication only for the student with access to the data; c) clearing houses, which provide a large amount of data (DHS, the Internet Traffic Archive (ITA), DatCat, ISC SIE, PREDICT). Issues: is this data generated by these houses? Does the data depend on other factors we don't control? Ethical considerations on data sharing: a) openness and the pursuit of knowledge; b) how does this influence a company's business model? c) should Symantec provide all its data to McAfee? d) issues with trade secrets. Say Symantec wants to publish; there are consequences: a) the financial cost of making 300 TB of data public; b) fairness issues with competitors; c) intellectual property rights. Sharing proprietary data versus priority and recognition: should a researcher share this data with others? She might then lose the recognition, or there might be ethical issues. There is a problem of maintaining secrecy versus revealing research participants, and of competitive advantage versus efficiency of research. Some solutions: a) government funding; b) give access to the data only at the company site. Privacy: users have the right to preserve it; examples of such data are IP addresses and users' online behavior. HIPAA gives some guidelines on how to share data. Solutions rely on technology (anonymization), but there are limitations. Sound experimentation: avoid error, get independent confirmation. We have had problems with papers that give false solutions or only work for particular scenarios, especially those that make strong claims and use datasets nobody else can obtain. It is therefore good to have an external review board for limiting research, and to restrict access to on-site only. How to improve this: the WINE model. Conclusion: Symantec wants to provide incentives for sharing, but there is a need for a better model.
Questions
The talk is about IRBs. It traces their history from the 1947 Nuremberg Code and the 1972 Tuskegee study, culminating in the US Dept of HHS regulation 45 CFR 46. IRBs are designed to avoid the problems of saying "trust me".
IRB review principles include respect for persons, beneficence (do participants benefit from the study), and justice (e.g., are results distributed fairly across demographics).
Problems with IRB process
- Mandated by feds
- Formed in response to extreme events
- Process may appear arbitrary
- Very little evaluation of the effectiveness of the IRB process
This reminds her of the TSA; each of the above points applies. Two questions: why make the claim, and what to do if it is true? Does the IRB understand the protocol well enough to really understand what the dangers might be?
Contribution: a survey of the IRBs at the top 40 US universities
(according to US News). They checked whether the roster is online, and
whether the members have a CS background.
- Only 17 list rosters online (surprising, since regulations require
that they maintain this list and send it to US HHS).
- Of these, 5 have CS representation. These are from schools that
have strong HCI departments.
In 2004, there were many accusations from journalism schools and oral history researchers of mission creep in IRBs, and a backlash.
In 2008, Garfinkel wrote a paper on IRBs, while Allman wrote a paper about what program committees should do when research may be unethical. The speaker poses that maybe we should ask IRBs?
Examples of Maritza's IRB protocols
Providing some context to the talk: the general term for IRBs is human subjects review boards; her talk is IRB-focused. Very few people serve on IRB boards: Dave Dittrich found that 4 of 200 people at NDSS serve on IRBs.
Key challenge is to quantify risks, so that researchers and IRBs can articulate these risks. She notes that many IRBs have slid into also considering university protection from legal harms, so it is no longer only about ethics.
As the distance between researcher and subject decreases, we are more likely to view as human subjects. Conversely, as distance increases, we are more likely to view as non-human subjects.
Also starting to see less concern about human subjects research and more concern with human-harming research.
Publicity of the data matters. Pay for access is viewed as less private. If a site is public, then that has greater distance?
Dave: What about stolen private data that is later published (ala Anonymous v. HBGary, Wikileaks)?
Most US models focus on minimizing, not eliminating, risks to research subjects. But what are the risks?
Dave Dittrich: Where does CS fit? Big question. Does computer security research fit in the IRB model?
Dave Dittrich is on an IRB, joined to understand the process. He has observed lots of ambiguous statements in IRB applications about data security statements.
Applications often copy boilerplate statements that were used in traditional studies, where information is stored on paper in a vault, onto electronic studies run on e.g. SurveyMonkey, which use machines not under the researcher's control, where the same claims ("No others will have access to the data") no longer apply. Even when the statements are more specific, they are not comprehensive (e.g., what is the password policy that protects the encrypted hard drive?).
Elizabeth notes that the NSF is requiring that applicants must have a data plan. Maritza noted that the requirements talk about applying "community standards", but we don't have that in security yet.
Dave mentioned a "certificate of confidentiality", where in a physical environment you are studying illegal behavior, you get legal protection from having to turn over the data. But when data collection is remote (e.g., iphone app), then the certificate doesn't apply if the police arrest the participant and seize the evidence stored on the phone/PC.
Elizabeth notes the problem with terms of service on sites like surveymonkey and facebook.
Two mock IRB cases were discussed by the panel.
Case 1: conducting a research project on worm probes on a computer network. It collects network data automatically: is that personal data collection? It collects IP addresses, which may be personal. A final issue is whether you may store the information in the cloud.
John McHugh: the issues are specific, so it depends on whether you need to link IP to customer address, or to ASN, or to other sources. He provocatively argues that all computer-generated data (network traces) is not personal.
John argues that for cloud storage, you should not store IPs.
http://security.harvard.edu/ outlines different levels of data sensitivity, and the security that would be required. Tiered access or consent models may be the way forward, for instance, asking for consent to store data in the cloud versus locally.
The second case study is about deploying a worm in a controlled environment. The risk is that the worm might get out and cause additional harm.
Nicolas: The question is that often the worm designer is the leading expert and has no peers who can adequately review the proposal. The answer is that they could bring in outside expertise, but then how do you find the appropriate experts to do the review? Nicolas worries that bringing in outside experts will reveal the idea to an expert who denies the proposal and then steals the idea. Dave Dittrich: encourage the proposer to do a literature review and make a more thorough argument, which is then passed on to a peer for review who is expected to keep it confidential.
It was also brought up that if the event is truly closed, the real problems come with how it is disclosed.
John: we may be slightly ahead of the pack, but whether or not you
publish won't affect the eventual discovery of the attack methods.
Sven: sometimes in the past we did not disclose methods.
Finally, where should we go from here?
1. Pedagogy ( = Academy + Industry) on ethics
a. Slow institutional response to curricular needs and limited faculty experience. Pedagogical change only requires interested faculty. So a demand for better ethics education may come from many sectors, not only security.
b. In social sciences, the prof who teaches research methods course could be given a slot on the IRB, in order to expose them (and consequently students) on how they work and how to be ethical.
2. Promote ethics among professional associations.
a. This is slower, but could be useful for promoting standards once they have coalesced. We could even come up with a profession-wide "ethical clean bill of health" standard for publications, as the AMA does.
3. Use the government
a. Even slower method, but could use regulatory pressure to exact change. Current NSA curricular guidelines have no ethical education component, so we could try to change.
John worries that IRB approval process will cause people to avoid that research path. Dave's response is tough luck, we need to raise the bar. But David Robinson wonders if you can really raise the bar a little bit at all. A lot of the resistance in community is that moving to IRBs would be viewed as overkill. Rachel Greenstadt (Drexel) asked about the role of program committees. Others noted that there could be a preemptive effect if PCs do start to reject suspect papers.
Maritza says that she worries about inconsistency across and within IRBs and the negative effect that could have in preventing research.
Enforced Community Standards For Research on Users of the Tor Anonymity Network
It is impossible to detect passive monitoring in Tor if you are a Tor node. Chris worries about 3 classes of those studying Tor: governments surveilling citizens, hackers who establish rogue servers, and academic researchers. What should we do about academic researchers who push the bounds by studying users of Tor?
Two case studies of Tor. One was McCoy et al. at PETS. The researchers did not seek out or obtain legal approval, and did not receive a warm welcome at PETS. Following bad press, the university launched an investigation into the ethics and legality of the study. After two days, the IRB cleared the researchers.
The second study was by Castelluccia et al. (2010). They discovered that stolen session cookies gathered from open WiFi networks enabled a reconstruction of user search history. The authors wanted to assess the effectiveness of the attack. One part of this was to set up a rogue Tor server, where they collected data from the Tor network; 1800 distinct users were observed. Concern about this paper: the authors valued the privacy of their work colleagues more than the privacy of Tor users.
Chris notes that the McCoy paper was designed to analyze Tor traffic specifically, while Castelluccia used Tor as a means of analyzing large amounts of network traffic that they wouldn't otherwise have access to. Chris argues that unless something is done, we should expect to see more research that uses Tor this way.
While the reaction to McCoy et al. at PETS was overwhelmingly negative, there is no real paper trail of the dissatisfaction; ditto for Castelluccia et al. So as an outsider, you only see the publications and believe that the work is blessed by the community. We need to write down what the guidelines are, otherwise they will be ignored. The goal of the talk is to kick off the discussion on how to set guidelines for using Tor for research.
The claim is that snooping on Tor users is worse than snooping on the general public, because they have signaled their privacy interest by using Tor. Ross wonders whether banning this is wise, because in other contexts IRBs and governance have been captured by the research community. So we must be wary of setting up research-driven institutions to evaluate ethics in computer security, because they will likely be captured.
George Danezis (Microsoft Research) made the point that in many non-US countries collecting data on tor may not be illegal, and furthermore that the researchers above were not from the US.
Chris's proposal: such research should be allowed only when it is focused on users of the Tor network, not when Tor is just a source of Internet traffic generally. The standard should be enforced through PCs at conferences.
Tyler Moore - Ethical Dilemmas in Take-Down Research
(notes by Nicolas Christin)
Will talk about war stories from his research and ethical implications. Based on phishing studies (series of Moore/Clayton papers).
Take-down of phishing sites is usually outsourced from banks to security/brand protection firms. 9 ethical questions:
1. Should researchers notify affected parties in order to expedite take-down? Sites can remain online for months at a time. They decided against notifying because they weren't sure whom to notify (banks? site operators?). They had also signed NDAs with take-down operators, so couldn't really communicate anything. A clinical-trial approach was suggested: stop as soon as statistical significance is reached, but do not interfere prior to that.
2. Should researchers intervene to assist victims? What should we do when we know possible identities of victims (e.g., phished credentials stored in clear text at the miscreant's site)? Contacting affected users may be difficult and complex for the researchers, very expensive, and take a lot of time, which is a disincentive for researchers to even look into it. Dave Dittrich's point: it should be expected that this kind of inconvenient data will be obtained, and researchers should take that into consideration when starting such research. Rod Rasmussen's story: it may be very difficult to act due to individual complications (a sysadmin in Iraq, the backup person busy, a machine used for 911 dispatch that no one wanted to touch).
3. Should researchers fabricate plausible content to conduct "pure"
experiments of take-down? Most interesting research is observational,
so fabrication of reports to study take-down is usually
unethical.
Nicolas: questions whether testing through deception would
be unethical.
Tyler: orthogonal problem.
4. Should researchers collect world-readable data from "private" locations? Example: Webalizer data is usually easy to obtain when people leave it in the default location (/Webalizer). They tried this in their study, acquiring that information to figure out the traffic dynamics of phishing websites. Most people in the audience think it is dodgy; the point was made that it really depends on the type of data being collected. Roger makes the point that referrers can reveal a lot. Angelos makes the point that the potential danger is in correlating several data sources.
5. What if our analysis will assist criminals? This is equivalent to the question of the suitability of full disclosure. Generally, the costs outweigh the benefits.
6. Should investigatory techniques be revealed? When an investigative technique is revealed publicly, attackers can adapt. The Mr. Brain example: phishing kits had backdoors, known by researchers (who didn't write about it) and law enforcement; someone at Netcraft published it in a blog post, disrupting a criminal investigation. George: this is actually a broader question, not limited to interfering with law enforcement; the interfered-with party may be different. Tyler: the criminal-enterprise aspect is important. George: the moral judgment is very difficult to make, and is kind of the question at hand.
7. When should datasets be made public or kept secret? Example of
PhishTank; most other phishing reports are kept secret. Banks don't like
PhishTank because they're exposed; compromised sites are also exposed, so
there is pushback on that side as well. Key measure: is the publication of
that information beneficial (a utilitarian approach)? Sites in PhishTank
were recompromised more slowly, which seems like a better outcome here. He
generalized that to sharing data being a good thing to reduce phishers'
success: balance harm with benefits.
Ethical Considerations of Sharing Data for Cybersecurity Research
Darren Shou (Symantec)
- Continuous sharing of data is a big need for research, but there are obstacles: data sets may become stale, and the data is held by operators and not researchers.
- Existing data sharing. One way is to hire interns, but there is a limit to what you can achieve in a short time.
- Proprietary data can make or break a career, but this is unfair.
- One option is a data clearinghouse, but there is very little operator incentive to share. Existing data clearing houses include Predict, ISC SIE, DatCat and ITA (Internet traffic archive). Three factors for these: 1) does the clearinghouse generate data or rely on contributions, 2) preservation of the data over time, and 3) confirmation for data research. Unfortunately, much of the data is considered IP by operators. Fundamental problem with clearinghouse is factor #1.
- Ethical considerations of data sharing
- 1. Openness: the difficulty of openness. When do you have to share (for IP concerns)? There is a competitive-advantage issue: the data is viewed as a competitive advantage, so sharing it would undermine business models. Another issue is that there is a financial cost to sharing data, and on top of all this there are publication issues. Examples of openness dilemmas: #1: sharing proprietary datasets versus priority and recognition. There is a tension between maintaining secrecy and revealing research participants, and between competitive advantage and efficiency of research.
o He discusses some compromises. Regarding competitive advantage, we can share with a small set of vetted third parties who do not make the data entirely public.
- 2. Privacy
o One particular issue is that the most useful data comes from real users and networks.
o Need to consider what the value is to users from collecting the data.
o One example: what should be done with the IP addresses of compromised hosts?
- 3. Sound experimentation
o Often it is difficult to establish meaningful comparisons without access to privately held data, such as the comprehensiveness of antivirus.
Solutions
1. External review board including industry and academics, à la the PREDICT external relations board.
2. Restrict access to on-site only.
3. Operator data-hosting
Shou then proposes a model called WINE for exchanging information, which attempts to balance each of the competing ethical considerations. Symantec has several petabytes of data, with shared working sets on the order of terabytes. He claims that providing confirmation of research results can itself have competitive value.
John McHugh: why not view the aging of data as a benefit that counteracts the competitive-advantage concern? Response: that's OK, but it would be better to share current data.
Symantec has chosen, for example, to share telemetry data. But even that is problematic: what to do with IP addresses? Polo Chau applied graph mining to malware detection.
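As a rough illustration of the kind of graph mining mentioned here (not the actual method, which this report does not describe): telemetry of which machines report which files can be treated as a bipartite graph, and reputation can be propagated from files with known labels to unlabeled ones. The observations and prior labels below are invented purely for illustration.

# Hypothetical sketch: iterative reputation propagation over a
# machine-file bipartite graph built from (made-up) telemetry.
observations = [
    ("machine1", "fileA"), ("machine1", "fileB"),
    ("machine2", "fileB"), ("machine2", "fileC"),
    ("machine3", "fileC"),
]
# Prior labels: 1.0 = known malware, 0.0 = known benign.
priors = {"fileA": 1.0, "fileC": 0.0}

files = {f for _, f in observations}
machines = {m for m, _ in observations}
file_score = {f: priors.get(f, 0.5) for f in files}   # unknowns start neutral
machine_score = {m: 0.5 for m in machines}

for _ in range(10):  # a few rounds of propagation suffice on this toy graph
    # A machine's score is the mean score of files seen on it.
    for m in machines:
        fs = [file_score[f] for mm, f in observations if mm == m]
        machine_score[m] = sum(fs) / len(fs)
    # An unlabeled file's score is the mean score of machines reporting it.
    for f in files:
        if f in priors:
            continue  # keep known labels fixed
        ms = [machine_score[m] for m, ff in observations if ff == f]
        file_score[f] = sum(ms) / len(ms)

print({f: round(s, 2) for f, s in sorted(file_score.items())})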
Panel: Moving forward, building an ethics community
Panel moderator: Erin Kenneally (UC San Diego/CAIDA/Elchemy)
Panelists: John McHugh (RedJack/UNC), Angelos Stavrou (George Mason
University), Ross Anderson (University of Cambridge), Nicolas Christin
(Carnegie Mellon University)
There is a 3-legged stool of ethics: principles, applications, and implementation. The key problem with principles is the lack of shared community values. Researchers also lack guidance on ethical standards, as well as any enforcement capability. For any of this to work, we need the right incentives.
One thing we can fix as a community: the Menlo Report. One component of the Menlo Report is the EIA (ethical impact assessment), a self-assessment tool to help researchers assess ethical impact along the lines of a privacy impact assessment (PIA). On enforcement, there is an opportunity to self-regulate.
Ross: was on sabbatical in Silicon Valley, where there is an attempt to share security data across the industry. Current arrangements are chaotic; it would be better if they were transparent and accountable. Ross will focus on the governance of such a sharing body. Any given attack may be of interest to 3 or 4 of the 12 big players involved. Data warehouses are non-starters since operators want to keep data under their control. So instead, should there be a transnational body (à la CERT) or a single national body? How can you set up the governance rules to prevent industry capture? This is important because the MPAA might want to use the data-sharing arrangement to enforce copyright. There is a serious conflict between laws and norms: you cannot simply restrict it to "serious crime only", since IP violations carry greater prison sentences than computer misuse in the UK. One suggestion is that the body shouldn't be an outgrowth of existing efforts. A final warning from history: in the US Wild West there was a shortage of sheriffs, so private law enforcement was handed to the Pinkertons; 20 years later, strikers were being shot in Pittsburgh.
John McHugh: from the talks today we see lots of contradictions and disagreements about what is ethical and what is not. Either we are an adjunct to law enforcement in this area, or we need to back away from this research. What we cannot do is play "cowboy" and go out and collect lots of personal data from a botnet. One problem is that we don't have a real vehicle for getting involved with law enforcement when it would be useful to assist. If we are to do the "cowboy" things, we need to create a more explicit arrangement with law enforcement.
Chris: the problem is that we see the same people doing detection and enforcement; he claims we need to enforce a stricter separation between the two roles. We need to develop ways to conduct research that are acceptable to our peers, but there is currently no consensus.
Nicolas: it is important not to take only a Western world view. Most people today have taken a utilitarian view: helping more than hurting. But there are other approaches, even in the West, and in the East there is, for example, Buddhist ethics. There can be conflict between Buddhist and Western ethics, so it is important to consider alternative ethical world views.
On the Japanese one-click fraud case, he thought he was in the clear on a vigilante study where no PII was involved. But someone in Japan claimed it was unethical, because he identified some service providers and embarrassed them, and it wasn't his place to displace law enforcement.
Moderator: speakers have attempted to operationalize ethics by institutionalizing research. But does that put the cart before the horse by not waiting for community standards and agreement to emerge?
Another question: do people agree that there is a lot of research that stays below the radar?
But what can the community do to carry out enforcement? Sven: send a clear message to bad-apple researchers that what they are doing is wrong. We must clearly set the boundaries in order to carry out enforcement.
Roger: incentives are perverse here. People are rewarded for publishing unethical research.
Ross: people need to go to jail.
John: Sees a lot of wiggling here, but lots of ethical decisions do not have such wiggle room. Ethics are a more universal set of values than a particular list of rules. Bothered that professional societies would be given the task of making rules that are then subject to legalistic interpretations.
We are at a stage of trying to break up inertia, where tolerance of past misdeeds continues into the future. Roger: there is a growing trend of individuals taking action as program chairs, but the problem is that paper submission is a repeated game and not all PCs take action.
Rachel: has been talking with an anthropologist who believes ...
Ross: We have to be careful of using IRBs merely as ass-covering.
Moderator: important not to abrogate ethical responsibility by simply working with law enforcement.
Sven: can imagine a scenario where the police work with a researcher so that the researcher engages in activities that are legal for them but not legal for the police.
Maritza: why not publicly ostracize papers that fail the ethics test? This seems like a reasonable argument, but the problem is that we don't have clear guidelines, so the reasoning behind such decisions would need to be public.
Moderator survey: would it be helpful to have a tool to judge ethical decisions?
Ross: issue of jurisdiction matters. Dave: goes beyond just access to data, and into access to criminological information.
Dave: big problem is that principles aren't agreed upon, and it isn't clear how this maps onto activities.
One problem with IRBs is that if you want to compare the risks and benefits, you need to be able to quantify them, which is hard to do for security research.
Rachel: one aspect of IRB review is asking whether you can carry out the study without causing harm. This step often gets skipped in security research because participating in attacks is seen as cool.
Elizabeth: suggests that we start engaging with an IRB association. Also look at the AoIR document on the ethics of Internet research, first created in 2002. WECSR 2011 concluded.