This month we have Richard Austin's review of a book about honeypots, announcements from NIST, and many workshop and conference announcements.
The Security and Privacy Symposium in May of this year will feature two workshops: the second year of Web 2.0 Security and Privacy, and a newcomer to the symposium, Systematic Approaches to Digital Forensic Engineering. Workshops are a popular way to widen the scope of the symposium, and more are possible if enough organizers step forward to help with planning.
I was reflecting on the subjective nature of reviews of papers submitted to conferences, and I wonder whether it is possible to evaluate reviewers in some way that is useful across conferences. Authors might be helped by knowing how much confidence is placed in the reviewers. Are they experienced and generally helpful, or are they inexperienced and overly critical? My guess is that this knowledge would help authors make sense of the reviewing process, and it should lead to better reviews.
An innovation that the SP Symposium used for a couple of years has turned out to have flaws. Authors were invited to submit short papers to the conference, and this helped get interesting but not fully developed research directions presented to the audience. However, the process caused confusion and sometimes resentment among authors, who were not sure how much prestige would go along with a short paper and were further concerned about their ability to publish follow-on work. Surely there is some sensible way to address these concerns and make room for short papers at top-rate venues? Let your technical committee members and program chairs know if you have any ideas on this.
Remember, firewalls may not be as high as they appear,