Merge pull request #684 from PurpleI2P/openssl recent changes
Merge pull request #683 from vaygr/openbsd-build fixed build with OpenBSD
Merge branch 'openssl' into openbsd-build
fixed build with OpenBSD
I have received a gazillion press requests, but I am traveling in Australia and Asia and have had to decline most of them. That's okay, really, because we don't know much of anything about the attacks.
If I had to guess, though, I don't think it's China. I think it's more likely related to the DDoS attacks against Brian Krebs than the probing attacks against the Internet infrastructure, despite how prescient that essay seems right now. And, no, I don't think China is going to launch a preemptive attack on the Internet.
Interesting article listing the squid species that can still be ethically eaten.
The problem, of course, is that on a restaurant menu it's just labeled "squid."
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
EDITED TO ADD: By "ethically," I meant that the article discusses which species can be sustainably caught. The article does not address the moral issues of eating squid -- and other cephalopods -- in the first place.
correct stream termination
With a new school year underway, concerns about student privacy are at the forefront of parents’ and students’ minds. The Student Privacy Pledge, which recently topped 300 signatories and reached its two-year launch anniversary, is at the center of discussions about how to make sure tech and education companies protect students and their information. A voluntary effort led by the Future of Privacy Forum and the Software & Information Industry Association (SIIA), the Pledge holds the edtech companies who sign it to a set of commitments intended to protect student privacy.
But the Student Privacy Pledge as it stands is flawed. While we praise the Pledge’s effort to create industry norms around privacy, its loopholes prevent it from actually protecting student data.
The real problems with the Student Privacy Pledge are not in its 12 large, bold commitment statements—which we generally like—but in the fine-print definitions under them.
First, the Pledge’s definition of “student personal information” is enough to call into question the integrity of the entire Pledge. By limiting the definition to data that is “both collected and maintained on an individual level” and “linked to personally identifiable information,” the Pledge seems to permit signatories to collect sensitive and potentially identifying data such as search history, so long as it is not tied to a student’s name. The key problem here is that the term “personally identifiable information” is not defined and is surely meant to be narrowly interpreted, allowing companies to collect and use a significant amount of data outside the strictures of the Pledge. This pool of data potentially available to edtech providers is more revealing than traditional academic records, and can paint a picture of students’ activities and habits that was not available before.
By contrast, the federal definition, found in FERPA and the accompanying regulations, is broad and includes both “direct” and “indirect” identifiers, and any behavioral “metadata” tied to those identifiers. The federal definition also includes “Other information that, alone or in combination, is linked or linkable to a specific student that would allow a reasonable person in the school community, who does not have personal knowledge of the relevant circumstances, to identify the student with reasonable certainty.”
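To see why the narrow definition matters, consider how easily activity data that is not “linked to personally identifiable information” can still single out a student. The following sketch is purely illustrative (every name, query, and field in it is invented, and it describes no signatory's actual practice):

```python
# Illustrative only: activity data "not linked to PII" can still
# identify a specific student via quasi-identifiers.

pseudonymous_log = {
    "device_7f3a": [
        "marching band tryouts lincoln high",
        "ap chemistry homework chapter 4",
        "directions to 412 elm street",
    ],
}

# Mundane facts an observer might already know about each student.
known_facts = {
    "alice": {"lincoln high", "marching band", "elm street"},
    "bob": {"lincoln high", "debate club", "oak avenue"},
}

def likely_owner(queries, candidates):
    """Score each candidate by how many known facts appear in the log."""
    joined = " ".join(queries)
    scores = {name: sum(fact in joined for fact in facts)
              for name, facts in candidates.items()}
    return max(scores, key=scores.get), scores

owner, scores = likely_owner(pseudonymous_log["device_7f3a"], known_facts)
print(owner, scores)  # alice {'alice': 3, 'bob': 1}
```

Three mundane facts are enough to separate the two candidates here; real search histories contain far more than three.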
Second, the Pledge’s definition of “school service provider” is limited to providers of applications, online services, or websites that are “designed and marketed” for educational purposes.
A provider of a product that is marketed for and deployed in classrooms, but wasn’t necessarily “designed” for educational purposes, falls outside the Pledge. The Pledge also excludes providers when they offer “general audience” apps, online services, and websites. We alleged in our FTC complaint against Google that the Pledge does apply to data collection on “general audience” websites when that data collection is only possible by virtue of a student using log-in credentials that were generated for educational purposes. However, SIIA, a principal developer of the Pledge, argued to the contrary, saying that the Pledge permits providers to collect data on students on general audience websites even if students are using their school accounts.
The Pledge’s definition also does not include providers of devices like laptops and tablets, who are free to collect and use student data contrary to the Pledge.
Simple changes to the definitions of “student personal information” and “school service provider”—to bring them in line with how we generally understand those plain-English terms—would give the Pledge real bite, especially since the Pledge is intended to be legally enforced by the Federal Trade Commission.
While enforcement only applies to companies who choose to sign on, we think that if the Student Privacy Pledge meant what it said, and if signatories actually committed to the practices outlined under the heading “We Commit To”, it would amount to genuine protection for students. But with the definitions as they stand, the Pledge rings hollow.
Notwithstanding the need to improve the definitions, the Pledge could do some good. Unfortunately, the FTC has yet to take action on our complaint alleging that Google violated the Student Privacy Pledge. We urge the Commission to take this matter seriously so that parents and students can trust that when companies promise to do (or not do) something, they will be held accountable.
As the school year continues, the conversation about education technology and student privacy is more important than ever. Tell us about your experience in your own schools and communities by taking our student privacy survey.
Merge pull request #680 from vaygr/libressl-support fixed build with LibreSSL
fixed build with LibreSSL
Merge pull request #679 from BOPOHA/openssl fix paths
Merge pull request #678 from l-n-s/move_rpm_files move rpm-related files to contrib folder
move rpm-related files to contrib folder
Merge pull request #677 from BOPOHA/patch-1 fixed Centos 7 notes
fixed Centos 7 notes
Merge pull request #676 from BOPOHA/openssl added spec and service files
Obama: Traditionally, when we think about security and protecting ourselves, we think in terms of armor or walls. Increasingly, I find myself looking to medicine and thinking about viruses, antibodies. Part of the reason why cybersecurity continues to be so hard is because the threat is not a bunch of tanks rolling at you but a whole bunch of systems that may be vulnerable to a worm getting in there. It means that we've got to think differently about our security, make different investments that may not be as sexy but may actually end up being as important as anything.
What I spend a lot of time worrying about are things like pandemics. You can't build walls in order to prevent the next airborne lethal flu from landing on our shores. Instead, what we need to be able to do is set up systems to create public health systems in all parts of the world, click triggers that tell us when we see something emerging, and make sure we've got quick protocols and systems that allow us to make vaccines a lot smarter. So if you take a public health model, and you think about how we can deal with, you know, the problems of cybersecurity, a lot may end up being really helpful in thinking about the AI threats.
added spec and service files
portable windows data directory
fixed #675. I2LUA define
Lance Spitzner looks at the safety features of a power saw and tries to apply them to Internet security:
By the way, here are some of the key safety features that are built into the DeWalt Mitre Saw. Notice that in all three of these, the human does not have to do anything special: just use the device. This is how we need to think from a security perspective.
- Safety Cover: There is a plastic safety cover that protects the entire rotating blade. The only time the blade is actually exposed is when you lower the saw to actually cut into the wood. The moment you start to raise the blade after cutting, the plastic cover protects everything again. This means to hurt yourself you have to manually lower the blade with one hand then insert your hand into the cutting blade zone.
- Power Switch: Actually, there is no power switch. Instead, after the saw is plugged in, you activate it by depressing a lever. Let the lever go and the saw stops. This means that if you fall, slip, black out, have a heart attack, or have any other type of accident and let go of the lever, the saw automatically stops. In other words, the saw always fails to the off (safe) position.
- Shadow: The saw has a light that projects a shadow of the cutting blade precisely on the wood where the blade will cut. No guessing where the blade is going to cut.
Safety is like security: you cannot eliminate risk. But I feel this is a great example of how security can learn from other fields about how to take people into account.
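The lever is a textbook fail-safe default, and that pattern translates directly to software. Here is a minimal sketch of the same idea in code (my own illustration with hypothetical names, not from Spitzner's post): access stays granted only while it is being actively renewed, and any failure returns the system to the safe state.

```python
import time

class DeadMansSwitch:
    """Grant access only while the operator actively holds it open.

    Like the saw lever: releasing it (failing to renew) returns the
    system to the off/safe state, regardless of why it was released.
    """

    def __init__(self, timeout_seconds):
        self.timeout = timeout_seconds
        self.last_renewed = None

    def press(self):
        """Renew the grant; must be called continuously to stay active."""
        self.last_renewed = time.monotonic()

    def is_active(self):
        if self.last_renewed is None:
            return False  # never pressed: the default state is safe
        return (time.monotonic() - self.last_renewed) < self.timeout

switch = DeadMansSwitch(timeout_seconds=2.0)
switch.press()
print(switch.is_active())  # True: the "lever" is held down
time.sleep(2.1)
print(switch.is_active())  # False: released (or crashed), so we fail off
```

The design choice is that the off state requires no action at all; every failure mode, from a crash to a blackout, lands there by default.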
Having for years enforced a constitutionally offensive border search regime at physical borders and U.S. international airports, Customs and Border Protection (CBP) recently proposed to expand its violations in troubling new ways by prompting travelers from countries on the State Department’s Visa Waiver Program list to provide their “social media identifier.” Mounting criticism recently prompted the agency to commit to some useful limits, but the proposal remains flawed.
Recently joining the ranks of diverse critics is the U.N. Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, who wrote to the U.S. Ambassador at the end of September.
EFF submitted several sets of comments expressing our concerns with the proposal, beginning during the initial comment period. After CBP extended the original comment period until the end of September, the agency received comments from thousands of users opposing its ill-considered and counter-productive policy. It issued a preliminary response to those initial comments, to which we replied in a follow-up analysis noting the proposal's continuing defects. We also joined coalition comments compiled by the Center for Democracy & Technology, as well as a second set of coalition comments organized by the Brennan Center for Justice in response to a DHS notice required by the Privacy Act.
Violating international law
The international community has also grown outspoken. An important new voice joined the debate at the end of September, when David Kaye, the U.N. Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, wrote to remind our government that international law protects everyone’s “right to maintain an opinion without interference and to seek, receive and impart information and ideas of all kinds, regardless of frontiers,” in addition to “the right of individuals to be protected…against arbitrary…interference with their privacy and correspondence.”
Mr. Kaye’s letter also reiterates the necessary and proportionate principles developed in 2013 by a global coalition of civil society, privacy, and technology experts (including EFF) and endorsed by over 600 organizations and a quarter million individuals around the world. It goes on to challenge CBP’s proposal for half a dozen reasons, including the vagueness that has concerned EFF. In particular, Mr. Kaye notes that:
It is unstated whether (and under what circumstances) officers may request additional information or access to private accounts. It is also unclear whether officers can request or persuade travelers who have left the data field blank to provide information, or whether they would be questioned about why they left the field blank.
Of course, social media profiles can reveal an immense amount of personal details about an individual. Many social media users share sensitive information online intended for friends and family that they would not share with their (or a foreign) government.
Chilling speech and expression
Our allies at Access Now have noted that “A person’s online identifiers are gateways into an enormous amount of their online expression and associations, which can reflect highly sensitive information about that person’s opinions, beliefs, identity, and community.” As my colleague Sophia Cope wrote in August:
[S]ocial media handles...can easily lead the government to information about [a traveler's] political leanings, religious affiliations, reading habits, purchase histories, dating preferences, and sexual orientations, among other things. Moreover, given the highly networked nature of social media, the government would also learn such personal details about travelers’ family members, friends, professional colleagues, and other innocent associates, many of whom may be U.S. citizens and/or residents with constitutional and statutory rights.
Travelers accustomed to political repression in their own countries may, as the U.N. special rapporteur noted, inhibit their own expression to avoid scrutiny during anticipated future travel. So, too, will Americans: knowing that an international friend's decision to answer CBP’s proposed question could compromise our own opportunities for anonymous speech and association, many Americans—especially those familiar with our country’s history of suppressing dissent—may rationally decide to limit their online speech and avoid controversial topics that might invite scrutiny.
Such constitutionally offensive chilling effects are established and predictable in the face of documented surveillance and even more likely given the troubling history of U.S. federal authorities excluding visitors for ideological reasons. CBP's proposal to ask visitors to disclose their social media handles undermines the Obama administration’s written commitment to reverse this policy in order to allow Americans to hear diverse views.
Undermining the privacy of Americans
CBP and DHS formulated the proposed policy in the face of longstanding criticism of their domestic programs monitoring social media activity, to which executive officials have recently re-committed their agencies. Our comments joining the Brennan Center, in particular, highlight how CBP's current proposal would further impact the rights of Americans: it would enable CBP to map relationships between visitors and their U.S. contacts, and then share information gleaned from the social media profiles of those U.S. residents with other agencies potentially poised to monitor them.
Collecting data on the social media profiles of international travelers could also exacerbate longstanding domestic concerns in other ways.
Only two years ago, the Supreme Court held in Riley v. California that cell phones are not subject to search incident to arrest absent a judicial warrant. In other words, even when an arrest is justified by probable cause that a person has committed a criminal offense, police must receive permission from a neutral arbiter, supported by a separate showing of probable cause, before searching the arrestee’s cell phone.
Yet at the border, DHS already violates the spirit of Riley in ways that this proposal could intensify. First, CBP has long claimed the power to search any electronic device crossing a U.S. border—including those belonging to U.S. citizens—for any reason at all, even without the individual suspicion long required to pat down a suspect within the U.S. or the judicial warrant required by Riley.
Government lawyers who argued Riley conceded that the power to seize a phone from someone arrested within the U.S. did not justify accessing data—like social media profiles—stored in the cloud (e.g., by tapping on the Facebook app). If, however, CBP collects social media information at U.S. borders from VWP travelers (and through them, their U.S. contacts), it could enable the government to do what in Riley it conceded it could not: access data about Americans stored in the cloud without first gaining a judicial warrant.
Put another way, learning the social media accounts of travelers would expand the government's reach beyond data already gathered from devices and could allow agencies to circumvent legal limits that protect the privacy of Americans within the U.S.
Limits acknowledging some concerns
Our original comments expressed concern that CBP proposed to characterize as optional a question posed in an inherently coercive setting, and to invite travelers to reveal private and sensitive information by posing that question in a vague way.
Fortunately, CBP issued a statement repudiating a previous draft of its form on which its proposed question appeared as compulsory. The agency said it will make clear that “Providing this information will be voluntary. If an applicant chooses to not fill out or answer questions regarding social media, the ESTA application and I-94W can still be submitted.”
Its most recent statement also commits that "CBP will not violate any social media privacy settings in processing ESTA applications." This pledge is especially important given the agency's established practice of arbitrarily seizing devices at borders and airports, with which the government could conceivably not only access the known social media profiles of travelers but even potentially commandeer them.
On the one hand, we are proud of having helped compel CBP to accept reasonable limits.
Continuing constitutional defects
On the other hand, CBP's proposal remains flawed and continues to suffer from constitutional defects.
Sophisticated travelers may recognize that “information associated with your online presence” such as a “social media identifier” could be limited to a handle or pseudonym used to identify oneself on a particular social network. Some, however, may go further and provide multiple identifiers, or possibly even their passwords, enabling the government to potentially access private content. CBP should clarify how it will treat information provided by travelers and establish strict parameters to prevent misuse.
Moreover, CBP admits that it will share data collected through its new question with other agencies "that have a need to know the information to carry out their national security, law enforcement, immigration, or other homeland security functions." This fails to address the concerns that we and others—including the U.N. Special Rapporteur—have raised about the proposal's chilling effects on expression.
Not only will travelers potentially silence themselves in their home countries to avoid prompting scrutiny when traveling to the U.S., but CBP's proposal may lead Americans to seek fewer international relationships with contacts through whom our own information could be exposed. It could also lead other countries to reciprocally demand personally identifying information from Americans seeking to enter their countries, driving a race to the bottom.
Perhaps most dangerously, the proposal omits any indication of how social media profiles will be evaluated or the process through which a traveler could be identified as a security risk. These standards must be articulated in advance to limit individual discretion and prevent ideological profiling of the sort that has long limited the rights of Americans to hear unpopular views.
Even after CBP recently articulated its limits, the proposal remains flawed. It undermines international law, individual rights, the rights of Americans both to hold and to hear unpopular views, and the Obama administration’s foreign policy to promote freedom of expression.
Having filed our comments alongside thousands of other critics, we hope that concerns from both Americans and the international community will spur the administration to reject CBP’s speech-suppressing proposal. Concerned readers can amplify our concerns by prompting their congressional representatives—especially those on the Senate and House Homeland Security committees—to write their own letters seeking answers from DHS and CBP.
Before all of this ever went down
In another place, another town
You were just a face in the crowd
Out in the street walking around
A face in the crowd
If we don’t speak up now, the days when we can walk around with our heads held high without fear of surveillance are numbered. Federal and local law enforcement across the country are adopting sophisticated facial recognition technologies to identify us on the streets and in social media by matching our faces to massive databases.
We knew the threat was looming. But a brand new report from the Georgetown Law Center for Privacy and Technology indicates the problem is far worse than we could’ve imagined. The researchers compare the use of facial recognition to a perpetual line-up, where everyday, law-abiding citizens are pulled into law enforcement investigations without their consent and, in many cases, without their knowledge.
The researchers sent more than 100 public records requests to police agencies. Among their findings:
In response to the report, EFF has joined a large coalition of privacy advocates to demand the U.S. Department of Justice, Civil Rights Division take two major steps to keep facial recognition in check:
1. Expand ongoing investigations of police practices and include in future investigations an examination of whether the use of surveillance technologies, including face recognition technology, has had a disparate impact on communities of color; and
2. Consult with and advise the FBI to examine whether the use of face recognition has had a disparate impact on communities of color.
The problem isn’t just the police but also an aggressive push by biometric tech vendors who downplay the accuracy issues while marketing the systems as crucial to contemporary policing. The danger that facial recognition poses to our privacy and civil liberties is real and immediate. While we do give up a small amount of privacy when we walk around in public, we must preserve our ability to blend in as just a face in the crowd.
Read the Georgetown Law report on facial recognition: The Perpetual Line-Up: Unregulated Police Face Recognition in America.
Harvard researcher Yarden Katz has just published some fascinating findings on which universities have sold patents to notorious patent-holding company Intellectual Ventures (IV). Of the nearly 30,000 active patents that IV lists publicly, 470 of them were originally assigned to universities—a total of 61 institutions.
Katz explains how he arrived at these numbers:
How many of IV’s patents came from universities?
To answer this, I have scraped the names of the original assignees for each of the U.S. patents in the portfolio from patent records (see annotated patents list). The analysis shows that nearly 500 of IV’s patents originally belonged to universities, including state schools.
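In outline, Katz's counting step is easy to reproduce. Below is a minimal sketch under assumed inputs (the CSV layout, file name, and keyword list are invented for illustration; Katz's actual scraper and data differ):

```python
import csv

# Invented layout: one row per patent in the portfolio, with the
# original assignee scraped from USPTO records. The file name, column
# name, and keyword list are all assumptions for this sketch.
UNIVERSITY_KEYWORDS = ("university", "regents", "institute of technology")

def count_university_patents(path):
    """Tally portfolio patents whose original assignee looks academic."""
    counts = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            assignee = row["original_assignee"].strip().lower()
            if any(keyword in assignee for keyword in UNIVERSITY_KEYWORDS):
                counts[assignee] = counts.get(assignee, 0) + 1
    return counts

# counts = count_university_patents("iv_portfolio.csv")
# print(sum(counts.values()), "patents from", len(counts), "institutions")
```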
Katz found some other surprises in IV’s portfolio, including nearly 100 patents from the U.S. Navy.
If you know nothing else about patent trolls, you’ve still probably heard the name Intellectual Ventures before. IV is one of the largest patent trolls in the world and has been behind many of the most egregious cases of litigation abuse. Earlier this year, we wrote about IV suing a florist over its patent on crew assignments. For many years, it has tried to cultivate relationships with American universities so it can add their patents to its portfolio.
As we’ve discussed here before, over 100 universities have endorsed a set of principles for university patenting practices. Among other points, it suggests that universities should require that licensees “operate under a business model that encourages commercialization and does not rely primarily on threats of infringement litigation to generate revenue.” Unfortunately, a number of those institutions appear not to be living up to this principle.
From Katz’s post:
Both the University of California and Caltech signed the 2007 statement, yet IV now owns tens of patents from these schools that were filed after 2007. For instance, the IV portfolio includes a Caltech patent filed in 2010 (granted in 2011) and University of California patent filed in 2008 (granted in 2014). Other universities that signed the statement, such as Stanford, Harvard and MIT, did not have patents in the portfolio.
When universities sell patents to trolls, it directly undermines the role that they play as engines of innovation: the more patents trolls hold in a certain area of technology, the more dangerous that field is for innovators. The licensing decisions that universities make today will strengthen or sabotage the next generation of inventors. That’s why we encourage everyone to speak out: students, faculty, alumni, parents, and community members. These policies affect all of us.
If you’d like to see universities pledge not to partner with trolls, then take a moment to tell your university. We’ve designed our petition to make it easy to share the results with university leadership. For example, here are all of the signatories affiliated with the University of California-Berkeley. We’re eager to work with local organizers to help you make sure that your institution hears your voice.
We’ve fought patent trolls in the courts and advocated for laws that bring fairness to the patent system. Universities are the next battleground. Together, we can stop the flow of university inventions into the hands of bad actors.
Former NSA attorneys John DeLong and Susan Hennessey have written a fascinating article describing a particular incident of oversight failure inside the NSA. Technically, the story hinges on a definitional difference between the NSA's and the FISA court's meanings of the word "archived." (For the record, I would have defaulted to the NSA's interpretation, which feels more accurate technically.) But while the story is worth reading, what's especially interesting are the broader issues about how a nontechnical judiciary can provide oversight over a very technical data collection-and-analysis organization -- especially if the oversight must largely be conducted in secret.
From the article:
Broader root cause analysis aside, the BR FISA debacle made clear that the specific matter of shared legal interpretation needed to be addressed. Moving forward, the government agreed that NSA would coordinate all significant legal interpretations with DOJ. That sounds like an easy solution, but making it meaningful in practice is highly complex. Consider this example: a court order might require that "all collected data must be deleted after two years." NSA engineers must then make a list for the NSA attorneys:
- What does deleted mean? Does it mean make inaccessible to analysts or does it mean forensically wipe off the system so data is gone forever? Or does it mean something in between?
- What about backup systems used solely for disaster recovery? Does the data need to be removed there, too, within two years, even though it's largely inaccessible and typically there is a planned delay to account for mistakes in the operational system?
- When does the timer start?
- What's the legally-relevant unit of measurement for timestamp computation -- a day, an hour, a second, a millisecond?
- If a piece of data is deleted one second after two years, is that an incident of noncompliance? What about a delay of one day? ....
- What about various system logs that simply record the fact that NSA had a data object, but no significant details of the actual object? Do those logs need to be deleted too? If so, how soon?
- What about hard copy printouts?
And that is only a tiny sample of the questions that need to be answered for that small sentence fragment. Put yourself in the shoes of an NSA attorney: which of these questions -- in particular the answers -- require significant interpretations to be coordinated with DOJ and which determinations can be made internally?
Now put yourself in the shoes of a DOJ attorney who receives from an NSA attorney a subset of this list for advice and counsel. Which questions are truly significant from your perspective? Are there any questions here that are so significant they should be presented to the Court, so that the government can be sufficiently confident that the Court understands how the two-year rule is really being interpreted and applied?
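To make the ambiguity concrete, here is a minimal sketch (my own illustration, not from the article) of how three defensible readings of "deleted after two years" produce different deadlines:

```python
from datetime import datetime, timedelta

# Illustration of the "two years" ambiguity; numbers are mine.
collected = datetime(2015, 3, 1)  # and *when* does the timer start?

by_calendar = collected.replace(year=collected.year + 2)      # calendar years
by_days     = collected + timedelta(days=2 * 365)             # 730 days
by_seconds  = collected + timedelta(seconds=2 * 365 * 86400)  # 63,072,000 s

for label, deadline in [("calendar years", by_calendar),
                        ("730 days", by_days),
                        ("63,072,000 s", by_seconds)]:
    print(f"{label:>14}: delete by {deadline:%Y-%m-%d}")

# calendar years: delete by 2017-03-01
#       730 days: delete by 2017-02-28
#   63,072,000 s: delete by 2017-02-28
# The 2016 leap day puts the calendar reading a full day later: the same
# deletion is compliant under one interpretation and an "incident of
# noncompliance" under another.
```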
In many places I have separated different kinds of oversight: are we doing things right versus are we doing the right things? This is very much about the first kind: is the NSA complying with the rules the courts impose on it? I believe that the NSA tries very hard to follow the rules it's given, while at the same time being very aggressive about how it interprets any ambiguities and using its nonadversarial relationship with its overseers to its advantage.
The only possible solution I can see to all of this is more public scrutiny. Secrecy is toxic here.
Award-winning journalist Amy Goodman won an important victory for press freedom yesterday, but given alarming new comments made by her prosecutor, it may be short-lived.
On Monday, a judge quickly dismissed an absurd ‘riot’ charge brought against her by North Dakota authorities that stemmed from her coverage on Democracy Now of a violent attack on Dakota Access Pipeline protesters. But apparently, prosecutors don’t plan on dropping their investigation into her. They announced they may charge her again and indicated they want her unaired footage.
The New York Times reported on Monday evening:
[Goodman] and her lawyers declared victory on Monday, but Ladd Erickson, a state prosecutor who is assisting the Morton County state’s attorney’s office in the case, said other charges were possible.
“I believe they want to keep the investigation open and see if there is any evidence in the unedited and unpublished videos that we could better detail in an affidavit for the judge,” he said via email. “The Democracy Now video that many people have seen doesn’t have much evidence value in it.”
As we reported last week, the prosecutors first issued an arrest warrant for Goodman back in September for “criminal trespassing,” while indicating she was not entitled to any protections as a journalist because they claimed, “Everything she reported on was from the position of justifying the protest actions.”
Then, the prosecutors admitted that there were "legal issues with proving the notice of trespassing requirements in the statute," i.e. they knew they couldn’t prove their case. So they dropped the trespassing allegation and instead charged her with participating in a ‘riot,’ which was dismissed Monday.
So even though they’ve been stymied twice by the law, they’re now thinking of returning for round three. It’s clear that the prosecutors have pre-determined Goodman should be in jail, and now just have to figure out how—statutes and First Amendment be damned.
Worse, they are now claiming they need Democracy Now’s unaired and unedited footage from the event to bolster their supposed “case.” They didn’t say how they plan on getting that footage, but one can assume they will need to subpoena it. North Dakota has a strong reporter’s shield law that should protect Goodman and Democracy Now from turning it over, but given how much the prosecutors have disregarded the law so far, there’s no indication they’ll stop now.
North Dakota’s unconstitutional pursuit of Goodman is beyond an embarrassment at this point. It could not be clearer that Goodman was doing her job as a journalist and exercising her First Amendment rights to report on newsworthy events. These prosecutors should be sanctioned or fired for misconduct, and the Justice Department should consider investigating Morton County state’s attorney’s office if they continue their investigation.
In the meantime, the North Dakota state's attorney's office should order its prosecutors to immediately cease trying to invent new ways to stifle free speech and chill press freedom. It is dangerous to every journalist in the country.
sequential LeaseSet request
Tor 0.2.8.9 backports a fix for a security hole in previous versions of Tor that would allow a remote attacker to crash a Tor client, hidden service, relay, or authority. All Tor users should upgrade to this version, or to 0.2.9.4-alpha. Patches will be released for older versions of Tor.
You can download the source from the Tor website. Packages should be available over the next week or so.
Below is a list of changes since 0.2.8.8.
Tor 0.2.9.4-alpha fixes a security hole in previous versions of Tor that would allow a remote attacker to crash a Tor client, hidden service, relay, or authority. All Tor users should upgrade to this version, or to 0.2.8.9. Patches will be released for older versions of Tor.
Tor 0.2.9.4-alpha also adds numerous small features and fix-ups to previous versions of Tor, including the implementation of a feature to future-proof the Tor ecosystem against protocol changes, some bug fixes necessary for Tor Browser to use unix domain sockets correctly, and several portability improvements. We anticipate that this will be the last alpha in the Tor 0.2.9 series, and that the next release will be a release candidate.
You can download the source from the usual place on the website. Packages should be available over the next several days. Remember to check the signatures!
Please note: This is an alpha release. You should only try this one if you are interested in tracking Tor development, testing new features, making sure that Tor still builds on unusual platforms, or generally trying to hunt down bugs. If you want a stable experience, please stick to the stable releases.
Below are the changes since 0.2.9.3-alpha.
When EFF launched a campaign last year to encourage the public to help us uncover police use of biometric technology, we weren’t sure what to expect. Within a few weeks, however, hundreds of people joined us in filing public records requests around the country.
Ultimately, dozens of local government agencies responded with documents revealing devices capable of digital fingerprinting and facial recognition, while many more reported back—sometimes erroneously—that they hadn’t used this technology at all. Several, however, either didn’t respond, demanded exorbitant fees, or outright rejected the requests.
EFF has now joined the ACLU of Minnesota in filing an amicus brief [.pdf] in a particularly egregious case now before the Minnesota Court of Appeals, demanding the release of emails regarding the Hennepin County Sheriff Office’s facial recognition program.
In August 2016, web engineer and public records researcher Tony Webster filed a request based on EFF’s letter template with Hennepin County, a jurisdiction that includes Minneapolis, host city of the 2018 Super Bowl. He sought emails, contracts, and other records related to the use of technology that can scan and recognize fingerprints, faces, irises, and other forms of biometrics.
Hennepin County resisted the request, so Webster lawyered up. In April, a judge ruled in Webster’s favor. As the Minneapolis Star-Tribune reported:
In an April 22 order, Administrative Law Judge Jim Mortenson described four months of unexplained delays, improperly redacted records, inadequate answers and other behavior by county officials in response to Webster’s request.
The county’s actions violated the Minnesota Government Data Practices Act (MGDPA), Mortenson found. He fined the county $300, the maximum allowed by law; ordered it to pay up to $5,000 in Webster’s attorney’s fees; refund $950 of the filing fee; and pay $1,000 in court costs.
Perhaps most significant, he ordered the county to figure out a way to make its millions of e-mail messages publicly accessible by June 1.
It was a huge victory for Webster. But Hennepin County appealed, and thus a skirmish over biometric records has become a crucial battleground over the public’s right to access the emails of government officials across the state of Minnesota.
Biometric technology is an emerging threat to privacy. By biometrics, we mean the physical and behavioral characteristics that make us unique, such as our fingerprints, faces, irises and retinas, tattoos, gaits, and more. Police around the country have begun adopting and testing these systems. The devices are often mobile, such as handheld devices or smart phone apps.
Some emails that Webster already received show that the Hennepin County Sheriff’s Office is contemplating using facial recognition technology on still images in investigations. Even more concerning, there’s evidence that in the next two years the sheriff intends to use real-time facial recognition to identify people in surveillance camera streams, including those owned by private entities.
The records obtained show that jail inmates had their mugshots enrolled in a system designed by the German firm Cognitec. One particular email showed how the $200,000 system poses a threat to the privacy of individuals not involved in crimes and presents a significant financial burden on taxpayers. As a criminal informational analyst with the Hennepin County Sheriff’s Office wrote:
"[The] system is so good I’ve found possible matches that turned out to be close relatives…It costs a shit-ton … but I love it.”
In our brief, we draw from the wealth of records that EFF and our partners at MuckRock News received through the crowdsourcing campaign to explain why these emails are key to informing the public debate over mobile biometrics.
As EFF Senior Staff Attorney Jennifer Lynch, ACLU of Minnesota Legal Director Teresa Nelson, and the Stinson Leonard Street law firm write in the brief:
Using the documents released in response to these requests, EFF has been able to report on nine agencies using biometric technology in California. The documents revealed that most of the agencies are using digital fingerprinting devices, and many are also using iris, palm, and facial recognition technology, or plan to use them in the future. One of EFF's partner organizations used these same records to map the ties between the biometric contractors mentioned in the documents and firms in the defense and security industries that are deeply embedded in the national security apparatus. EFF is continuing to review records released by other agencies.
The brief also explains how emails often contain some of the most important information:
Emails released to other requesters have been equally revealing. For example, emails released by Miami-Dade County, Florida showed how MorphoTrak, a large biometrics vendor serving forty-two states' DMVs and many federal agencies, underpriced the devices in its invoices but increased the price later. Emails between the Phoenix, Arizona Police Department and its vendor revealed information about the sole-source procurement process. And emails released by the Polk County, Florida Sheriff's Office describe the timeline for installing biometrics devices in squad cars and outline the training process for using the devices.
EFF hopes the appellate judges recognize that democracy functions best when the public debate is informed by government records and deny Hennepin County’s attempts to shield its emails from scrutiny.
This is a harrowing story of a scam artist who convinced a mother that her daughter had been kidnapped. More stories are here. It's unclear whether these virtual kidnappers use data about their victims, or just call people at random and hope to get lucky. Either way, it's a new criminal use of smartphones and ubiquitous information.
Reminds me of the scammers who call low-wage workers at retail establishments late at night and convince them to do outlandish and occasionally dangerous things.
Brian Krebs is a popular reporter on the cybersecurity beat. He regularly exposes cybercriminals and their tactics, and consequently is regularly a target of their ire. Last month, he wrote about an online attack-for-hire service that resulted in the arrest of the two proprietors. In the aftermath, his site was taken down by a massive DDoS attack.
In many ways, this is nothing new. Distributed denial-of-service attacks are a family of attacks that cause websites and other Internet-connected systems to crash by overloading them with traffic. The "distributed" part means that other insecure computers on the Internet -- sometimes in the millions -- are recruited to a botnet to unwittingly participate in the attack. The tactics are decades old; DDoS attacks are perpetrated by lone hackers trying to be annoying, criminals trying to extort money, and governments testing their tactics. There are defenses, and there are companies that offer DDoS mitigation services for hire.
Basically, it's a size vs. size game. If the attackers can cobble together a fire hose of data bigger than the defender's capability to cope with, they win. If the defenders can increase their capability in the face of attack, they win.
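The arithmetic behind that fire hose is straightforward. Here is a back-of-the-envelope sketch in which the device count, per-device rate, and defender capacity are all assumptions for illustration, not measurements from the Krebs attack:

```python
# Back-of-the-envelope botnet math. Every number here is an assumption
# for illustration, not a measurement from the Krebs attack.

devices         = 150_000  # compromised cameras, DVRs, and routers
mbps_per_device = 5        # modest upstream bandwidth per device

attack_gbps   = devices * mbps_per_device / 1000
defender_gbps = 500        # hypothetical mitigation capacity

print(f"attack {attack_gbps:.0f} Gbps vs. defense {defender_gbps} Gbps")
# attack 750 Gbps vs. defense 500 Gbps: the bigger fire hose wins
```

The asymmetry is the point: recruiting more insecure devices is cheap for the attacker, while adding defensive capacity is expensive for the defender.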
What was new about the Krebs attack was both the massive scale and the particular devices the attackers recruited. Instead of using traditional computers for their botnet, they used CCTV cameras, digital video recorders, home routers, and other embedded computers attached to the Internet as part of the Internet of Things.
Much has been written about how the IoT is wildly insecure. In fact, the software used to attack Krebs was simple and amateurish. What this attack demonstrates is that the economics of the IoT mean that it will remain insecure unless government steps in to fix the problem. This is a market failure that can't get fixed on its own.
Our computers and smartphones are as secure as they are because there are teams of security engineers working on the problem. Companies like Microsoft, Apple, and Google spend a lot of time testing their code before it's released, and quickly patch vulnerabilities when they're discovered. Those companies can support such teams because those companies make a huge amount of money, either directly or indirectly, from their software -- and, in part, compete on its security. This isn't true of embedded systems like digital video recorders or home routers. Those systems are sold at a much lower margin, and are often built by offshore third parties. The companies involved simply don't have the expertise to make them secure.
Even worse, most of these devices don't have any way to be patched. Even though the source code to the botnet that attacked Krebs has been made public, we can't update the affected devices. Microsoft delivers security patches to your computer once a month. Apple does it just as regularly, but not on a fixed schedule. But the only way for you to update the firmware in your home router is to throw it away and buy a new one.
The security of our computers and phones also comes from the fact that we replace them regularly. We buy new laptops every few years. We get new phones even more frequently. This isn't true for all of the embedded IoT systems. They last for years, even decades. We might buy a new DVR every five or ten years. We replace our refrigerator every 25 years. We replace our thermostat approximately never. Already the banking industry is dealing with the security problems of Windows 95 embedded in ATMs. This same problem is going to occur all over the Internet of Things.
The market can't fix this because neither the buyer nor the seller cares. Think of all the CCTV cameras and DVRs used in the attack against Brian Krebs. The owners of those devices don't care. Their devices were cheap to buy, they still work, and they don't even know Brian. The sellers of those devices don't care: they're now selling newer and better models, and the original buyers only cared about price and features. There is no market solution because the insecurity is what economists call an externality: it's an effect of the purchasing decision that affects other people. Think of it kind of like invisible pollution.
What this all means is that the IoT will remain insecure unless government steps in and fixes the problem. When we have market failures, government is the only solution. The government could impose security regulations on IoT manufacturers, forcing them to make their devices secure even though their customers don't care. They could impose liabilities on manufacturers, allowing people like Brian Krebs to sue them. Any of these would raise the cost of insecurity and give companies incentives to spend money making their devices secure.
Of course, this would only be a domestic solution to an international problem. The Internet is global, and attackers can just as easily build a botnet out of IoT devices from Asia as from the United States. Long term, we need to build an Internet that is resilient against attacks like this. But that's a long time coming. In the meantime, you can expect more attacks that leverage insecure IoT devices.
This essay previously appeared on Vice Motherboard.
Here are some of the things that are vulnerable.
EDITED TO ADD (10/17): DARPA is looking for IoT-security ideas from the private sector.
Passionate about design and Internet freedom?
The Open Observatory of Network Interference (OONI), a free software project under The Tor Project that aims to uncover Internet censorship by monitoring its prevalence around the world, is seeking a UX designer.
Up until recently, users would run OONI’s software (ooniprobe) from the command line. Soon we aim to release both a desktop (web-based) and mobile client that will enable users to run ooniprobe from a graphical user interface. We want to make the user interface as usable and graphically appealing as possible to engage more users.
If you’re interested in designing the interface of OONI’s new desktop and mobile clients, please don’t hesitate to apply! Information on how to apply can be found here.
Squid ink risotto is a good accompaniment for any mild fish.
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.