The 11th FIRST (Forum of Incident Response and Security Teams) International Conference was held June 13-18 in Brisbane, Australia. [If you are not familiar with FIRST: it is the international organization of CERTs, or Computer Emergency Response Teams, with member teams from literally all over the world, representing universities, industry, defense organizations, and even whole countries. There is a lot more information at www.first.org if you are interested.] The 1999 conference had a full day of tutorials, followed by four days of paper and panel presentations, with the FIRST annual general meeting in between.

Among the tutorials, an excellent one on computer forensics was offered by our hosts in Brisbane (a group of professors and researchers from the Queensland University of Technology, together with members of the information warfare and law enforcement communities). The tutorial covered the issues, current methods, and legal aspects of computer forensics, concentrating on the practical "what should you do?" part of the story. A very interesting study by the Information Security Research Centre and the Faculty of Law at QUT examined specific computer crime cases: what computer forensics activity was performed, what evidence was collected, preserved, and then brought to court, and what the outcome of the legal process was. There is - at least in Australia - a huge difference between the IT and legal views on evidence admissibility. The conclusion of the study was not the usual "we need more legislation" but rather "IT had better get its act together before going to court."

The conference included addresses from the Minister of Justice and the Advisor on IT Security Policy, and the keynote speech was given by Professor William Caelli (QUT). The consensus: attacks are increasing significantly; cooperation is required for successful defense; and the role of CERTs and FIRST will only grow in importance into the 21st century.
Among the topics discussed in papers and panels:

-- Outsourcing security services and managing such services (SecureGate Ltd.): In the presenters' experience, outsourcing security is a real option, but security must be managed as an ongoing process (at both the client and the provider), not as a one-time task.

-- Insurability criteria for networks (Cisco): An insurance company needs to assess a network before establishing premiums and other contract terms, but no actuarial tables or other data exist so far, so outside security experts are needed to help determine the risks. Since the major focus of coverage is revenue lost to system downtime, the paper presented a scale for rating a network on the DoS-related vulnerabilities identified within it. Besides exposure to known vulnerabilities, the assessment must also take into account how diligently the operator stays current on newly discovered ones.

-- Intrusion detection services (IBM Global Services): Experience from providing real-time intrusion detection as a service was presented. The underlying mechanism is network monitoring, with the "bad" packets replicated to an analyst/operator for signature categorization; appropriate action is then recommended or taken. Of course, users should have protection *and* intrusion detection in place, not just ID to witness the intrusion. As for liability, the service had mostly operated on a best-effort basis, but it is now moving toward SLA-like (Service Level Agreement) arrangements.

-- Automated incident reporting at CERT/CC (CERT/CC, ISS): At CERT/CC there is no uniform way to report incidents (reports from teams come in all shapes, languages, and so on). As a result, incident data collection and processing are time-consuming and error-prone.
Based on some of the current efforts in the incident reporting space - such as CIDF (the Common Intrusion Detection Framework), the IDWG's IDEP (Intrusion Detection Exchange Protocol), and the CVE (Common Vulnerabilities and Exposures) project at MITRE for a common vocabulary - the paper proposed Web-based incident reporting with automated processing of textual incident reports.

-- Always-on devices - cable modems and ADSL modems (AT&T): The new high-speed access methods bring risks to users, providers, and the community. The PC's "Network Neighborhood" can extend to the physical neighborhood, and a static IP address *plus* an always-on connection makes a very easy target if additional *individual* protection is not in place at the user's home. Proper planning, provisioning, and operation are essential for the provider, and a careful service architecture can prevent some (but not all) of the problems.

-- Setting up a Policy Certification Authority (CERT-NL, DFN-CERT): This paper presented experiences at DFN and SURFnet in establishing a PCA, an organizational entity that provides third-party certification services to the constituency. Among the good news: it is possible to set up an institutional CA for 10,000 to 50,000 users that issues reliable certificates at a cost of about 50 cents per certificate!

-- And, of course, Wietse Venema's (IBM) "Bugs per Amount of Code" paper.

There were also many lively discussions during the sessions and outside, several BoFs (birds-of-a-feather sessions), and many exchanges of real-life experiences and information. Overall, a very good conference for everyone involved in incident response and security work.
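As a closing aside on the CERT/CC paper: the kind of automated processing of textual incident reports it proposed could be sketched along the following lines. The report layout and field names below are hypothetical illustrations for this note, not taken from the paper - real reports arrive in many shapes, which is exactly the problem the paper addressed.

```python
import re

# Hypothetical "Field: value" report layout - an illustration only, not the
# CERT/CC format. Addresses use the documentation ranges 192.0.2.0/24 and
# 198.51.100.0/24.
SAMPLE_REPORT = """\
Reporting-Team: EXAMPLE-CERT
Incident-Type: probe
Source-Host: 192.0.2.10
Target-Host: 198.51.100.7
Date: 1999-06-15
Description: repeated portmapper probes from a single source
"""

# One "Field: value" pair per line.
FIELD_RE = re.compile(r"^([A-Za-z-]+):\s*(.+)$", re.MULTILINE)

def parse_report(text):
    """Turn a free-text 'Field: value' incident report into a dict,
    so that collection and tallying can be automated instead of manual."""
    return {name.lower(): value.strip() for name, value in FIELD_RE.findall(text)}

if __name__ == "__main__":
    record = parse_report(SAMPLE_REPORT)
    print(record["incident-type"], record["source-host"])
```

A real system would of course need to handle multiple languages, free-form prose, and a shared vocabulary for incident and vulnerability names - which is where efforts like CVE come in.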