What allowances should be made for desirable cross-context recognition? Regrettably, as the siloed defenses of large platforms have improved, cross-platform threat models have become common. Specific situations where cross-context recognition is relevant for safety:

- Cross-platform threat models.
  - Example: an adult impersonating a minor solicits a minor on a large public platform, then steers the conversation to an E2EE platform where criminal activity can occur in secret.
  - Example: analogous scenarios for fraud and malware.
  - Example: bypassing content moderation on platform A by exploiting inbound traffic from platform B.
- Legitimate law enforcement referrals.
- Intra-platform defenses stymied by the elimination of signals potentially useful for cross-platform correlation.
When do nuisance advertising and the other harms of imperfect privacy justify disrupting the ability of safety efforts to address fraud, disinformation, malware, and exploitation? How should we weigh that balance?
A basic problem with the line of thinking in this section is that it fails to differentiate the growth- and revenue-generating parts of online services from legitimate public safety efforts. We may need to develop a strategy that properly limits the data available for growth while giving public safety work access to a larger set of data. Consider this IETF PEARG draft and the notion of replacement signals for counterabuse.
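To make the idea of partitioned data access concrete, here is a minimal sketch (my own illustration, not taken from the PEARG draft or the principles document) of one way to limit cross-context linkability for growth purposes while preserving an escrowed correlation capability for safety work: context-scoped pseudonyms derived with a keyed hash. Ordinary product surfaces only ever see per-context identifiers, while a separately held safety key can re-derive and join them. The function names and key-management setup are hypothetical.

```python
import hmac
import hashlib

def context_pseudonym(safety_key: bytes, user_id: str, context: str) -> str:
    """Derive a stable per-context pseudonym for a user.

    Growth and ads systems receive only this value, so they cannot link
    the same user across contexts without the safety key.
    """
    msg = f"{context}|{user_id}".encode()
    return hmac.new(safety_key, msg, hashlib.sha256).hexdigest()

def safety_join(safety_key: bytes, user_id: str, contexts: list[str]) -> dict[str, str]:
    """Re-derive a user's pseudonyms across several contexts.

    Only the (audited, access-controlled) safety function holds safety_key,
    so cross-context correlation is available for abuse investigations but
    not for routine growth analytics.
    """
    return {c: context_pseudonym(safety_key, user_id, c) for c in contexts}

if __name__ == "__main__":
    key = b"demo-key-held-only-by-safety-escrow"  # hypothetical key management
    # What platform A's moderation tooling sees day to day:
    print(context_pseudonym(key, "user-42", "platform-a"))
    # What the escrowed safety function can reconstruct when investigating
    # a cross-platform grooming or fraud report:
    print(safety_join(key, "user-42", ["platform-a", "platform-b"]))
```

The point of the sketch is the asymmetry: the derivation is one-way for anyone without the key, so removing raw identifiers from growth pipelines does not have to eliminate the signals safety teams need for cross-platform correlation.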
I agree that it is important to leave out any mention of nuisance advertising or similar annoyances. Annoying UX is not necessarily a privacy issue (although it can be, when dealing with it requires a person to do additional privacy labour).
Legitimate public safety efforts are important, and in many cases those efforts will require improved privacy for service members and other key people. See *Microtargeting as Information Warfare* by Dr. Jessica Dawson.
(The quoted text above is from privacy-principles/index.html, line 707 at commit 0757d6f.)