Any virtual event platform that enjoys substantial usage can become a target for attack, trolling, disruption, and surveillance. Numerous examples of such abuses were documented in March 2020, fueling a sudden concern over the problem and the coinage of the word “zoombombing” (which occurs when miscreants take over and misuse a publicly accessible video conference—for example, by projecting a desktop containing objectionable material).
Many new videoconferencing users are not trained in using these technologies or in underlying
principles of online security and privacy. In most cases, adoption is taking place quickly and out
of necessity, without much opportunity to consider important issues such as security training,
threats to privacy, impacts on vulnerable communities, or laws such as the European Union’s
General Data Protection Regulation (GDPR) and the US Family Educational Rights and Privacy
Act (FERPA).
In some cases, platform features can imply a level of privacy that is not truly supported. For
example, messages marked as private between attendees may appear in chat logs available to
hosts, without the knowledge of the participants. Participants may believe that virtual backgrounds
will obscure private details in their surroundings, but the image-processing technology supporting
virtual backgrounds can allow momentary glimpses of the real background that can be isolated and
examined in a recording.
These are important issues, and we are glad to see companies like Zoom making rapid strides to
address them. All in all, it is our opinion that these platforms, when properly configured, are
appropriate for the virtual conference use case.
Virtual conferences often focus on content that is considered a “publication” suitable for
wider dissemination, even if the conference itself is restricted to registered participants. As
these platforms are pressed into service for new everyday uses (university classes, virtual
conferences, religious services, birthday parties, department meetings, weddings, medical
appointments, psychotherapist appointments, and even high-level government meetings), it
is important to consider the additional security, privacy, and legal implications raised by
these settings. Indeed, even for virtual conferences, as these platforms are pressed into
service to facilitate more personal interaction between attendees, these issues are of
increasing importance.
Large gatherings of people are attractive targets for deliberate disruption, trolling, and other attacks. Conference organizers and platform developers should view their offerings ahead of time through the lens of possible disruptions. Many platforms offer controls such as “mute all microphones” or “block this participant.” However, it is difficult for a presenter to manage these controls in real time without interrupting the presentation. This is one of many reasons to put other volunteers or staff in charge of that aspect and to assign co-host privileges to those people ahead of time. Consider creating a dedicated “Security Officer” position for this task.
It is also prudent to have an explicit Code of Conduct that sets out rules for participants and the actions that will be taken when those rules are violated. Consider including items such as a real-name policy for attendees (similar to wearing a badge) and guidelines on whether it is acceptable to take screenshots or record other participants (e.g., recording not allowed, or allowed for personal use but not for further distribution). IEEE VR 2020 pre-registered all attendees in Slack and Mozilla Hubs with the full name they used for registration; in Hubs, attendees could change their nickname, but their real name remained visible and could not be changed. Consider also “reception” areas where participants arrive and are screened, briefed, and given audio checks before being moved into the main meeting. Beyond these specifics, test your platform with an eye toward attendees being as disruptive as possible, and then build in defenses against these kinds of disruptive actions. For example, what would happen if an attendee attempted to impersonate a respected member of the community, or posted offensive comments or links to malware in the chat? What if many attendees did this at the same time?
Appropriate defenses may include both prevention strategies and support for investigation, response, and censure after the fact. Above, we pointed out that virtual conferences will need at least as many volunteers as a physical conference; indeed, depending on how the virtual spaces are organized, they may need many more. When there are many small virtual spaces, be they text channels or 3D environments, moderators become increasingly important as these events become more open to the public. For example, IEEE VR 2020 had almost double the number of volunteers that would have been needed for the physical event, and they were still stretched thin monitoring and moderating Slack, Sli.do, and Hubs.