
Hard or soft? The skills needed for a risk-based approach to privacy


This week I had the pleasure of attending a seminar on the Risk-Based Approach to Privacy.  The keynote speaker was Richard Thomas, the former UK Data Protection Commissioner – although as he pointed out in his speech, he never liked the European term ‘data protection’, because privacy is about protecting people.

Richard elaborated on a methodology being developed by the Centre for Information Policy Leadership, in which abstract privacy principles can be applied to a project, according to an assessment of the likely risks and benefits arising.  The idea is to go beyond ‘tick-box compliance’, and get to the point of privacy protection – protecting people from harm.

I like to think that’s what we have already been doing with our clients, when we conduct PIAs!  Indeed, Richard acknowledged that the adoption of Privacy Impact Assessment (PIA) methodologies in Australia and New Zealand is well ahead of other countries.

During the Q&A, I raised a question:  Who should conduct the risk identification for a project, and what should their skills be?  I noted that sometimes people in positions of privilege, such as senior managers experiencing career success who are predominantly white, male and middle class, may struggle to imagine privacy harms that they have never personally experienced, such as discrimination, harassment, stalking or family violence.

In particular, I noted that data about a person’s home address, or increasingly geolocation data which can reveal patterns of behaviour including physical location, is often collected and exposed by organisations in a fairly casual fashion, and yet for some individuals, the exposure of their location data could lead to very serious harm.

(Taking the alternative strict legal approach won’t necessarily assist the lay person to identify the heightened privacy risks in their project either.  For example, privacy laws in Australia don’t recognise location data as ‘sensitive’ in the way that medical records are.)

Richard’s reply to my question noted that senior managers aren’t doing their job well if they don’t know their customer base, and so they should be capable of recognising when those risks might arise for their customers.  The discussion then moved on to examples of companies and government agencies which have got it wrong, and suffered spectacular ‘privacy fails’ as a result.

I suggest that a diverse set of skills is needed to conduct a robust privacy risk assessment.  Legal and analytical skills are certainly needed, and so is the ability to understand how data might be collected, collated and presented to system users and third parties.  But I believe that some underrated ‘soft’ skills like imagination and empathy are required too.

Something we do intuitively when we conduct a PIA for a client – before we even start to analyse compliance with the relevant privacy principles – is to imagine ourselves ‘standing in the shoes’ of their customers or citizens.

We then ask two questions: “What would I expect to happen?” and “How might I be harmed by this?”

If you are doing a privacy risk assessment of a project, start by asking yourself those two questions too.  Avail yourself of whatever resources you can find to help you measure or predict your customers’ expectations, and use your imagination to think of worst-case scenarios.

If you need some prompts to help you imagine the possible harms that might arise, this is where I find Richard Thomas’s work to add the most value.  He has articulated a set of potential ‘privacy harms’, which offers a novel approach for those used to focusing on risk or harm from the organisation’s perspective.

This spectrum of privacy harms runs from the tangible to the abstract:

  • Tangible or ‘material’ harms at one end – such as physical harm or threats of violence, stalking and harassment, identity theft, financial loss and psychological damage.
  • Intangible or ‘moral’ harms in the middle – such as reputational damage, humiliation, embarrassment or anxiety, loss of autonomy, discrimination and social exclusion.
  • Abstract or ‘social’ harms at the other end – such as the threats to democracy, the chilling effect on free speech, and the loss of trust and social cohesion posed by a ‘surveillance society’.

The task of the privacy professional is to pull together a robust risk assessment, encompassing both sharp analysis of compliance with legislation, and the more imprecise ‘what if’ scenarios generated through imagination and intuition.

If you need further assistance with privacy risk identification, or a formal Privacy Impact Assessment, just give us a call.

(And by the way, ‘stand in their shoes’ is also an exercise we explicitly do with participants in our face-to-face privacy awareness training program.  If you think your colleagues need some help achieving a shift in mindset about the importance of privacy to your organisation, ask us about which training program would work best for you.)


Free search, free speech, and the Right To Be Forgotten


Search engines are wondrous things. I myself use Google search umpteen times a day. I don’t think I could work or play without it anymore. And yet I am a strong supporter of the contentious “Right to be Forgotten”. The “RTBF” is hotly contested, and I am the first to admit it’s a messy business. For one thing, it’s not ideal that Google itself is required for now to adjudicate RTBF requests in Europe. But we have to accept that all of privacy is contestable. The balance of rights to privacy and rights to access information is tricky. RTBF has a long way to go, and I sense that European jurors and regulators are open and honest about this.

One of the starkest RTBF debating points is free speech. Does allowing individuals to have irrelevant, inaccurate and/or outdated search results blocked represent censorship? Is it an assault on free speech? There is surely a technical-legal question about whether the output of an algorithm represents “free speech”, and as far as I can see, that question remains open. Am I the only commentator surprised by this legal blind spot? I have to say that such uncertainty destabilises a great deal of the RTBF dispute.

I am not a lawyer, but I have a strong sense that search outputs are not the sort of thing that constitutes speech. Let’s bear in mind what web search is all about.

Google search is core to its multi-billion dollar advertising business. Search results are not unfiltered replicas of things found in the public domain, but rather the subtle outcome of complex Big Data processes. Google’s proprietary search algorithm is famously secret, but we do know how sensitive it is to context. Most people will have noticed that search results change day by day and from place to place. But why is this?

When we enter search parameters, the result we get is actually Google’s guess about what we are really looking for. Google in effect forms a hypothesis, drawing on much more than the express parameters, including our search history, browsing history, location and so on. And in all likelihood, search is influenced by the many other things Google gleans from the way we use its other properties — gmail, maps, YouTube, hangouts and Google+ — which are all linked now under one master data usage policy.

And here’s the really clever thing about search. Google monitors how well it’s predicting our real or underlying concerns. It uses a range of signals and metrics, to assess what we do with search results, and it continuously refines those processes. This is what Google really gets out of search: deep understanding of what its users are interested in, and how they are likely to respond to targeted advertising. Each search result is a little test of Google’s Artificial Intelligence, which, as some like to say, is getting to know us better than we know ourselves.

As important as they are, it seems to me that search results are really just a by-product of a gigantic information business. They are nothing like free speech.

Bradley Cooper’s taxi ride: a lesson in privacy risk


Hollywood heartthrob Bradley Cooper is a bad tipper.  That was the conclusion drawn by media – though denied by his PR rep – when data about 173 million New York taxi trips became public.

But I drew a different and more disturbing conclusion, which was how easy it is to get privacy ‘wrong’, when a government official is trying to get transparency ‘right’.  Here’s what happened.

In March 2014, the New York City Taxi & Limousine Commission (known by the rather sweet acronym TLC) released under FOI data recorded by taxis’ GPS systems.  The dataset covered more than 173 million individual taxi trips taken in New York City during 2013.  The FOI applicant used the data to make a cool visualisation of a day in the life of a NYC taxi, and published the data online for others to use.

Top marks for government transparency, useful ‘open data’ for urban transport and planning research … but not, as it turns out, great for Bradley Cooper.

Each trip record included the date, location and time of the pickup and drop-off, the fare paid, and any recorded tip.  It also included a unique code for each taxi and taxi driver.

In theory the identity of each taxi and taxi driver had been ‘anonymised’ by the use of ‘hashing’ – a one-way cryptographic technique which replaced each driver licence number and taxi medallion number with an alphanumeric code that can’t be reverse-engineered to determine the original information.

However, as a computer scientist who found the published dataset pointed out, hashing is not a good solution when you know what the original input might have looked like.  So if you know what taxi numbers look like (and that’s not difficult – they are printed on the side of the taxi), you can run the ‘hash’ against all possible taxi numbers.  It took a software developer less than an hour to re-identify each vehicle and driver for all 173 million trips.  So anyone can now calculate any individual driver’s income for the year, or the number of miles they have driven, or where they were at any given time.  So much for their privacy.
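For the technically curious, here is a minimal sketch of that brute-force reversal. It assumes, purely for illustration, an unsalted MD5 hash and a simplified medallion format of one digit, one letter and two digits – the real formats and details differ – but the principle is the same: enumerate every possible input, hash each one, and look up the match.

```python
import hashlib
from itertools import product

# A sketch of the brute-force re-identification described above.
# Assumptions (for illustration only): an unsalted MD5 hash, and a simplified
# medallion format of <digit><letter><digit><digit>, e.g. "5X44".
DIGITS = "0123456789"
LETTERS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def build_lookup_table():
    """Hash every possible medallion number, mapping hash -> original number."""
    table = {}
    for d1, letter, d2, d3 in product(DIGITS, LETTERS, DIGITS, DIGITS):
        medallion = f"{d1}{letter}{d2}{d3}"
        table[hashlib.md5(medallion.encode()).hexdigest()] = medallion
    return table

lookup = build_lookup_table()                # only ~26,000 possibilities to try
observed = hashlib.md5(b"5X44").hexdigest()  # a hashed ID seen in the 'anonymised' data
print(lookup[observed])                      # prints "5X44" - the original ID is recovered
```

With an input space that small, the entire ‘anonymisation’ can be undone in seconds on a laptop.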

But what’s this got to do with Bradley Cooper, you ask?

Geolocation data exposes behaviour

While the computer science community started debating the limits of hashing and how TLC should have ‘properly’ anonymised their dataset before releasing it under FOI, an astute postgrad student found that even if TLC had removed all the details about the driver and the taxi, the geolocation data alone could potentially identify taxi passengers.

To demonstrate this, Anthony Tockar googled images of celebrities getting in or out of taxis in New York during 2013.  Using other public data like celebrity gossip blogs, he was able to determine where and when various celebrities got into taxis.  Using the TLC dataset, Anthony could then identify exactly where Bradley Cooper went, and how much he paid.  (Mind you, cash tips are not recorded, hence the debate about whether or not he is a bad tipper.)
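The mechanics of that linkage attack are simple enough to sketch. In the hypothetical example below (the trip records and column names are invented, not the real TLC schema), an approximate pickup time and place gleaned from a photo and a gossip blog are enough to filter the ‘anonymised’ trips down to the one that matters, revealing the drop-off location and the fare.

```python
import pandas as pd

# Hypothetical trip records standing in for the published dataset
# (column names are assumptions, not the real TLC schema).
trips = pd.DataFrame({
    "pickup_time": pd.to_datetime(["2013-07-02 23:58", "2013-07-03 00:20", "2013-07-03 09:15"]),
    "pickup_lat":  [40.7644, 40.7500, 40.7300],
    "pickup_lon":  [-73.9857, -73.9900, -73.9950],
    "dropoff_lat": [40.7769, 40.7420, 40.7100],
    "dropoff_lon": [-73.9630, -74.0000, -73.9600],
    "fare":        [14.5, 9.0, 22.0],
})

# Auxiliary knowledge: roughly when and where the passenger got into the cab.
known_time = pd.Timestamp("2013-07-03 00:00")
known_lat, known_lon = 40.7642, -73.9858

# Keep only trips within five minutes and roughly 100 metres of the known pickup.
match = trips[
    (trips["pickup_time"].sub(known_time).abs() <= pd.Timedelta("5min"))
    & ((trips["pickup_lat"] - known_lat).abs() < 0.001)
    & ((trips["pickup_lon"] - known_lon).abs() < 0.001)
]
print(match[["dropoff_lat", "dropoff_lon", "fare"]])  # where they went, and what they paid
```

Scaled up to 173 million rows, the same filter is still only a few lines of code.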

Anthony also developed an interactive map, showing the drop-off address for each taxi trip which had begun at a notorious strip club.  I imagine the same could be done to easily identify the start or end-point for each taxi trip to or from an abortion clinic, a drug counselling service, or the home address of an investigative journalist, suspected whistle-blower or partner suspected of cheating.

As Anthony notes, “with only a small amount of auxiliary knowledge, using this dataset an attacker could identify where an individual went, how much they paid, weekly habits, etc … ‘I was working late at the office’ no longer cuts it”.

Open data or open sesame?

The publication of the NYC taxi dataset illustrates a particular challenge for privacy professionals: how easily an individual’s identity, pattern of behaviour, physical movements and other traits can be extrapolated from a supposedly ‘anonymous’ set of data, published with good intentions in the name of ‘open data’, public transparency or research.

Other recent examples have included the re-identification of ‘anonymous’ DNA donors to the ‘Thousand Genomes Project’ research database, and the re-identification of bike riders using London’s public bicycle hire scheme.  As the blogger who turned the publicly available bike trips dataset into interactive maps noted, allegedly ‘anonymous’ geolocation data, even when months old, can allow all sorts of inferences to be drawn about individuals – including their identity, and their behaviour.

Closer to home, the NSW Civil & Administrative Tribunal has shown they are willing to challenge, rather than blindly accept, assertions by government agencies about the identifiability of data, by accepting that if a “simple internet search” links the data in question back to an individual’s identity, the published data will meet the definition of ‘personal information’, and its publication or disclosure becomes contestable under privacy law.

For privacy professionals, the goalposts are shifting.  An assumption that data has been ‘de-identified’, and is thus not subject to privacy restrictions, may no longer hold true.

Privacy Officers would do well to engage with their colleagues who might publish or release datasets, such as people working in FOI, open data, corporate communications or research, to ensure they understand the risks of re-identification, and know about their disclosure obligations under privacy law.

You don’t want your ‘open data’ to become ‘open sesame’.

That’s a wrap: Privacy Awareness Week 2015


I think I am suffering indigestion, but it’s not from the delicious breakfast served at the opening event to mark Privacy Awareness Week this year.  It’s more like mental indigestion, as my brain tries to absorb all the nutrients found in the smorgasbord of news, insights, opinion, legal developments and regulatory guidance dished up to a hungry audience of privacy professionals in the space of just one week.  Here’s my wrap-up of the events I was able to attend.

Australian Privacy Commissioner Timothy Pilgrim kicked things off with the release of the results of the OAIC’s assessment of the online privacy policies of 20 entities regulated by the APPs.  Disturbingly, 55% of the policies reviewed did not adequately address one or more of the requirements in APP 1, such as how an individual could make a complaint, or access their personal information.  The median length of the 20 policies reviewed was 3,413 words, but over breakfast we heard that the longest was 18,000 words!  The issue of readability was further aired on the ABC TV’s Lateline program that night.

Pilgrim also spoke about the strategic direction of his office, which will be more focused on reviewing entities’ implementation and compliance, rather than issuing guidance, now that more than 12 months has elapsed since the APPs became law.  There were also some morsels about international cooperation in the regulatory space, including the next GPEN sweep which will focus on website and mobile apps impacting on children’s privacy.

And not for the last time during the week, there was mention of the privacy risks posed by the federal government’s new mandatory data retention laws, which include ensuring telecoms have adequate data security measures in place.  For privacy advocates, the silver lining in the data retention cloud is the reinvigoration of the bill to also introduce mandatory data breach notification.

Keynote speaker Mark Pesce – inventor, educator and broadcaster – described his fear that as individuals, we are no longer seen as citizens or even as consumers, but as “data harvesters”, acting blindly on behalf of corporations, which are hijacking our basic human desire to connect and share with others.  Mercifully he ended with his prediction of a more privacy-positive future, in which businesses see a competitive edge in ‘privacy by design’ offerings – albeit with a warning that we risk creating a two-tier society in which only the wealthy can afford their privacy.

An invigorating panel discussion followed, which included other insights into the near-future, such as UQ legal academic Dr Mark Burdon’s offering of the “sensorised home”, in which your health can be monitored through constant urine testing built into your toilet, and the CSIRO’s Dr Christine O’Keefe, who spoke about technological developments which will challenge our very notion of what personal information means, such as geolocation data, and eventually even “smart dust”!  (No, I didn’t understand what smart dust means either, but it sounds even scarier than smart toilets, right?)

While Facebook’s Head of Policy Mia Garlick spoke of managing such privacy risks by spreading privacy knowledge and capacity to engage in ‘privacy by design’ throughout an entity by using cross-functional review teams, ZDNet Editor Chris Duckett and Dr Burdon spoke against allowing the market to self-regulate, with Duckett describing companies as sociopaths hell-bent on hoovering up personal information, and Burdon noting that the very philosophy of ‘big data’ is fundamentally opposed to the limitations upon collection and secondary use posed by privacy regulation.

Phew – that was all before 9am on the first day!

Then there was the launch of the OAIC’s new Privacy Management Framework, workshops on how to conduct a PIA (facilitated by yours truly), and the timely release of the controversial Grubb v Telstra determination, which itself moves forward the debate about the meaning of “personal information” and the importance of geolocation data.  (I will write more about the implications of the Grubb case soon.)

Later in the week the NSW Privacy Commissioner Dr Liz Coombs hosted a Privacy Matters Forum, which looked at the link between ‘privacy by design’ and great customer service.  The keynote speaker was Michael Pratt, the inaugural NSW Customer Service Commissioner, who made the link between customer service and privacy by talking about customer-centric thinking in business process design, which engenders the customer trust necessary to underpin the sharing of personal information.  There was some spirited banter from panellists (who included our Associate Stephen Wilson) and the audience about whether privacy really is good for business, but sadly Chatham House rules prevent me from quoting anyone.

Dr Coombs also announced plans for another forum in November, featuring social researcher and prolific writer Hugh Mackay – I’m looking forward to hearing his insights on all things privacy.  Hopefully by then, my breakfast will have gone down, and I’ll be ready for the next course.  Bon appetit!

Where’s Wally? Geolocation and the challenge of privacy protection


Those pesky little digital breadcrumbs are starting to catch up with us.

A recent article in Wired noted that it’s not just your telephony provider who knows where you are – plenty of smartphone apps use a mixture of GPS, Bluetooth and Wi-Fi signals to pinpoint your location whenever you carry your phone.

A recent global ‘sweep’ of more than 1,200 mobile apps by Privacy Commissioners around the world found that three-quarters of all apps examined requested one or more permissions, the most common of which included location.  Disturbingly, 31% of apps requested information not relevant to the app’s stated functionality.  A prominent example is the flashlight app which tracks your precise location, and sells the data to advertisers.

Of course, sometimes location is relevant – we want the convenience of location-driven services, like local restaurant recommendations or weather predictions – but should we be worried about the privacy tradeoffs?

Nah, “it’s all good”, we’re told … “the data is de-identified before we use/disclose/sell it”.

Oh phew, we’re OK then.  Oh no, hang on, wait – not so fast with the complacency!

First, some third parties, like law enforcement agencies, can ask for precise details about you and your location.  They could ask your telephony provider, or the company which runs your phone operating system, or the company which operates the internet browser on your phone, before they even get to the companies which run the apps on your phone.

Second, a recent study suggests that four points of geolocation data alone can potentially uniquely identify 95% of the population.  Mark Pesce, the inventor, educator and broadcaster whose recent keynote address I have written about previously, described the geolocation data collected by and broadcast from our smartphones as “almost as unique as fingerprints”.
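The intuition behind that finding can be illustrated with a toy simulation – the numbers below are invented and prove nothing about the real study, but they show what ‘uniquely identify’ means in practice: pick four points from one person’s trace, and check whether anyone else’s trace contains all four.

```python
import random

# A toy illustration of 'unicity': how often do four spatio-temporal points,
# drawn from one person's own trace, appear together in nobody else's trace?
# Traces are randomly generated purely to show the mechanics.
random.seed(1)
N_USERS, POINTS_PER_USER, N_CELLS, N_HOURS = 1000, 50, 200, 24

traces = {
    user: {(random.randrange(N_CELLS), random.randrange(N_HOURS))
           for _ in range(POINTS_PER_USER)}
    for user in range(N_USERS)
}

def uniquely_identified(user, k=4):
    """Sample k of this user's points; True if no other trace contains them all."""
    sample = set(random.sample(sorted(traces[user]), k))
    return not any(sample <= traces[other] for other in traces if other != user)

share = sum(uniquely_identified(u) for u in traces) / N_USERS
print(f"{share:.0%} of simulated users pinned down by just 4 points")
```

The sparser and more distinctive our movements, the fewer points it takes.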

In other words – those ‘de-identified’ breadcrumbs are likely leading straight back to you.

Data showing where you have been will not only reveal the obvious, like where you live and work or who you visit, but it may also reveal particularly sensitive information – like if you have spent time at a church or a needle exchange, a strip club or an abortion clinic.  Some app-makers claim they can even tell which floor of a building you are on.  All useful stuff for your boss, ex-boyfriend or insurance company to know.

So what’s the solution?  Wired magazine offers the pessimistic view that the only way to avoid privacy intrusions is to “fry the GPS chip, turn off Location Services, and give up on some of the coolest, most personal tech currently available”.  ZDNet Editor Chris Duckett suggested at the recent PAW breakfast that we need a data breach involving the geolocation data of every politician to kick-start the political will needed for better regulation.

But I like to think that the law already offers a solution.  Indeed, a recent determination from the Australian Privacy Commissioner could be the starting point for more effective regulation of the collection, use and disclosure of geolocation data.  In Grubb v Telstra, the Privacy Commissioner found that journalist Ben Grubb was entitled to access the ‘metadata’ held about him by his mobile phone service provider – the breadcrumbs left behind as he goes about his day.

On the one hand, this determination from the Privacy Commissioner is just common sense, and a matter of fairness.  If a company is prepared to collate information from different sources about a customer in order to provide it to law enforcement, as Telstra admitted it did 85,000 times in 2013-14, then it should be equally prepared to do so if a customer exercises their access rights under the Privacy Act to ask to see all that information too.

On the other hand, this is a ground-breaking decision.  Telstra argued that geolocation data – the longitude and latitude of mobile phone towers connected to the customer’s phone at any given time – was not “personal information” about a customer, because on its face the data was anonymous.  They lost that argument, because the Privacy Commissioner found that a customer’s identity could be linked back to the geolocation data by a process of cross-matching different datasets.

The implications of this case go well beyond the telcos which will have to comply with the new metadata retention laws.  It even goes beyond just geolocation data.  This case has far-reaching consequences for any organisation which deals in any form of ‘big data’.  No-one should think that privacy can be protected simply by leaving out customer names or other identifiers from a database.  Any dataset which holds unit-record level data can potentially be linked to data from other sources, which can then lead to someone’s identity being ascertainable – which means it will meet the definition of “personal information”, and thus must be treated in accordance with the Australian Privacy Principles.  That has implications not only in relation to customer access requests, but also in relation to how that data can lawfully be used.

Think about the use limitation principle.  In theory, personal information should only be used for the purpose for which it was collected (connect your call via the nearest mobile phone tower, play a game or run your flashlight app), or a directly related secondary purpose (billing, complaint-handling and the like).  Any other type of secondary purpose will either need a special exemption (law enforcement, research, etc), or your consent.

(Oh, consent?  Sure, the website and app developers would like you to think that you ‘consented’ to have your location data sucked up and used for unrelated purposes, but seriously – have you even read those T&Cs?  Rather like the Londoners who ‘consented’ to give up their first-born child when signing up for free wifi, most of us don’t read T&Cs, because they are longer than Shakespearean plays.  I doubt that many would stand up to scrutiny under Australian privacy jurisprudence, which suggests a customer has not genuinely ‘consented’ to terms buried in a lengthy document, acceptance of which is a pre-condition to gaining goods or services.  When even a monolith like Microsoft is arguing the failure of the American ‘notice and consent’ model of privacy regulation in favour of collection limitation and use limitation principles like those on which Australian privacy law is modelled, it is time we stopped living in the fantasy land of believing that ‘consent’ has anything to do with these types of business practices.)

I believe that we are on the verge of a new awakening, in which people start to recognise not just the opportunities provided by geolocation data, but the threats it can pose – and start to demand privacy protection to match.

Businesses which suck up geolocation data should no longer rely on standard T&Cs to indicate a customer’s ‘consent’ to unrelated secondary uses.  The Grubb v Telstra case suggests they can also no longer argue that “it’s not personal information so you have nothing to worry about”.  Instead, they should get genuinely transparent about unrelated secondary uses, and seek informed, specific and voluntary agreement from their customers – or let our breadcrumbs blow away in the wind.

Privacy in the age of the algorithm: a primer in ethics for using Big Data


Brrr, winter is here! Time to crack open a red to enjoy with a lovely rich home-cooked lasagna. Except hang on – your pasta-buying habits have you marked down as a poor car insurance risk. You’d better hope you have a nice strong handshake to compensate.

The lure of Big Data analytics is that with enough data, organisations can make insightful correlations, which they can then use to make business predictions or decisions. Woolworths’ insurance arm allegedly only makes car insurance offers to those who are considered at lower risk of car accidents, which apparently has something to do with the consumption of red meat over pasta – and it knows what you eat because you shop at its supermarkets.

Of course, correlation is not the same as causation. There is a correlation between ownership of super-yachts and very high incomes, but going out and buying a super-yacht will not deliver you a pay rise. And in any case the correlation may be resting on some shaky assumptions. (What would Woolworths make of my vegetarian friend who drives to the supermarket to buy red meat for her husband and kids, but doesn’t eat it herself?)
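The super-yacht point is easy to demonstrate with a few lines of simulation. In the sketch below (all numbers invented), yacht spending and pay rises are both driven by a hidden income variable, so they correlate strongly even though neither causes the other.

```python
import random

# Toy demonstration that correlation is not causation: two variables driven by
# the same hidden factor (income) correlate strongly without either causing the other.
random.seed(42)

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

income      = [random.gauss(100, 20) for _ in range(5000)]    # the hidden confounder
yacht_spend = [0.5 * i + random.gauss(0, 3) for i in income]  # driven by income
pay_rise    = [0.3 * i + random.gauss(0, 3) for i in income]  # also driven by income

print(f"correlation(yacht_spend, pay_rise) = {pearson(yacht_spend, pay_rise):.2f}")
# Prints a strong correlation - yet buying a yacht still won't deliver a pay rise.
```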

Nonetheless, the age of the algorithm is here. From making decisions about who to hire, to predicting when a customer is pregnant, and from delivering targeted search results to predicting the students at risk of failure or the prisoners at risk of re-offending, business and government decisions are being made according to the correlations found through Big Data processing.

At a recent function, UQ legal academic Dr Mark Burdon suggested that in the future ruled by Big Data, our lives will become “trying to beat the algorithm”. In a predictive and pre-emptive world, empathy, forgiveness, rehabilitation, redemption, serendipity, autonomy and free will become so much more difficult.

So at what point do these predictions become too intrusive? When can an organisation even lawfully collect and use personal information for Big Data projects? How does an organisation tread the fine line between being innovative and smart about their decision-making, and being downright creepy?

Following a set of statutory privacy principles is obviously a good place to start, but sometimes mere compliance is not enough. Organisations need to manage customer expectations and accurately measure the shifting sands of public opinion, if they are to avoid a customer backlash.

In fact, getting privacy ‘right’ can be a business enabler. Harvard Business Review recently published case studies of businesses which improved their privacy practices by offering greater customer control and choice, and were rewarded with more useful data from their customers, rather than less.

Customer trust is the key.

There are pragmatic ways to develop customer trust and protect privacy but maintain your business objectives in a Big Data project, like strategically quarantining particular data types, offering personalisation, prompting a gatekeeper review between analytics and operationalisation phases, using role- and needs-based access controls in the presentation of data, finding the right de-identification technique, and establishing effective data governance.
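To make one of those items concrete, here is a hypothetical sketch of role- and needs-based access controls in the presentation of data. The record, fields and roles are mine, invented for illustration; the idea is simply that each role sees only the attributes it has a demonstrated need to see.

```python
# A minimal sketch of role- and needs-based access control at the presentation
# layer. All field names, roles and values are hypothetical.
CUSTOMER_RECORD = {
    "customer_id": "C-1042",
    "name": "Jane Citizen",
    "home_suburb": "Newtown",
    "purchase_history": ["pasta", "red wine"],
    "insurance_risk_score": 0.82,
}

ROLE_FIELDS = {
    "marketing_analyst": {"customer_id", "purchase_history"},             # no name or address
    "claims_officer":    {"customer_id", "name", "insurance_risk_score"},
    "data_scientist":    {"customer_id", "home_suburb", "purchase_history"},
}

def present(record: dict, role: str) -> dict:
    """Return only the fields this role is permitted to see."""
    allowed = ROLE_FIELDS.get(role, set())
    return {key: value for key, value in record.items() if key in allowed}

print(present(CUSTOMER_RECORD, "marketing_analyst"))
# {'customer_id': 'C-1042', 'purchase_history': ['pasta', 'red wine']}
```

The same gatekeeping logic can sit between the analytics phase and the operational systems that act on its insights.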

We have drawn together global research into the factors that influence customer trust, and our own experience guiding clients through advanced analytics and business intelligence projects, to develop a framework to balance business objectives with legal and ethical concerns about Big Data.

The resulting eBook is designed to guide organisations through how to engender customer trust, by building privacy protection in to Big Data projects, from the analytics stage through to operationalising insights into decision-making. It’s available now from our Publications page – you can download it while that lasagna’s in the oven.

Man made software in His own image


In 2002, a couple of Japanese visitors to Australia swapped passports with each other before walking through an automatic biometric border control gate being tested at Sydney airport. The facial recognition algorithm falsely matched each of them to the other’s passport photo. These gentlemen were in fact part of an international aviation industry study group and were in the habit of trying to fool biometric systems then being trialled round the world.

When I heard about this successful prank, I quipped that the algorithms were probably written by white people – because we think all Asians look the same. Colleagues thought I was making a typical sick joke, but actually I was half-serious. It did seem to me that the choice of facial features thought to be most distinguishing in a facial recognition model could be culturally biased.

Since that time, border control face recognition has come a long way, and I have not heard of such errors for many years. Until today.

The San Francisco Chronicle of July 21 carries a front page story about the cloud storage services of Google and Flickr mislabelling some black people as gorillas (see updated story). It’s a quite incredible episode. Google has apologised. Its Chief Architect of social, Yonatan Zunger, seems mortified judging by his tweets as reported, and is investigating.

The newspaper report quotes machine learning experts who suggest programmers with limited experience of diversity may be to blame. That seems plausible to me, although I wonder where exactly the algorithm R&D gets done, and how much control is to be had over the biometric models and their parameters along the path from basic research to application development.

So man has literally made software in his own image.

The public is now being exposed to Self Driving Cars, which are heavily reliant on machine vision, object recognition and artificial intelligence. If this sort of software can’t tell people from apes in static photos given lots of processing time, how does it perform in real time, with fleeting images, subject to noise, and with much greater complexity? It’s easy to imagine any number of real life scenarios where an autonomous car will have to make a split-second decision between two pretty similar looking objects appearing unexpectedly in its path.

The general expectation is that Self Driving Cars (SDCs) will be tested to exhaustion. And so they should. But if cultural partiality is affecting the work of programmers, it’s possible that testers suffer the same blind spots without knowing it. Maybe the offending photo labelling programs were never verified with black people. So how are the test cases for SDCs being selected? What might happen when an SDC ventures into environments and neighbourhoods where its programmers have never been?

Everybody in image processing and artificial intelligence should be humbled by the racist photo labelling. With the world being eaten by software, we need to reflect really deeply on how such design howlers arise. And frankly double check if we’re ready to let computer programs behind the wheel.

Is Barbie the new Big Brother? The Internet of Things is here


Is it just me, or are things starting to get genuinely creepy around here?

I’m not just talking about the trailer for the new TV show Humans, which looks like a gripping piece of sci-fi drama set in the not-too-distant future. I’m talking about the here and now.

Barbie dolls embedded with voice recognition that enables the doll to hold a conversation with a child, and ‘smart’ TV sets that record what their watchers are saying, and report back to … somewhere. Watches that not only monitor your heart rate, but match that data against your calendar for analysis. Vacuum cleaners that can recognise the objects they bump into.  Air-conditioners that know who is in the lounge room, and what their preferred temperature is. Blood pressure monitors that report straight to your hospital.

Sometimes it feels like the word ‘smart’ has become short-hand for ‘surveillance-based business model’. Welcome to the brave new world of the Internet of Things (IoT), in which every little device seems intent on collecting data about you, and likely beaming it to somewhere else for analysis and re-use.

There are plenty of ethical concerns raised about IoT devices. Objections to the ‘Hello Barbie’ go beyond the distasteful (but surely not unexpected) concern about direct marketing of other products to children, to include the recording and analysis of the words of children who have little understanding that they are being monitored, let alone how what they say might be used.

There are also sensible questions being asked about the legal and moral responsibilities of the companies collecting data from connected devices.

What should Mattel do if a child’s words, as interpreted by their Barbie doll, suggest they are suffering physical abuse? How should the doll respond if a child asks about death, or God, or where babies come from? What if another family member is overheard making threats?

Can Fitbit data be used in personal injury law suits? Can your fridge dob you in to your health insurer?

The security flaws when everything is connected

Then there are the security concerns. One report suggests that the average IoT device has 25 security flaws. If hackers can seize control of a moving car, how hard will it be for the bad guys to take over other, cheaper ‘smart’ devices?

If your child’s doll gets hacked, is it going to turn around and bully your child? Or perhaps it will start asking your child to reveal information about your family, like the password to the home security system, and when you will be away on holidays.

Maybe your vacuum cleaner will start broadcasting video from your home. Or perhaps extortionists will disable all the heating systems in connected homes unless the home owner pays up.

Conditioning us to pervasive surveillance?

And then there are the privacy issues.

The NSW Privacy Commissioner has written about the challenges that the IoT poses for our privacy laws, as devices with unique identifiers push the boundaries of what is regulated as “personal information”. Even devices which might be used by more than one person, such as smart meters within a home, are capable of identifying individuals within a group of users, simply from their patterns of behaviour.

But it’s not just about whether any given device and its manufacturer are regulated by a set of privacy principles. The IoT raises deeper concerns about whether our enthusiasm for ‘smart’ devices and connectivity is conditioning us to a world of pervasive surveillance, and automated decision-making.

Associate Professor Mark Andrejevic and Dr Mark Burdon have written about what they call the ‘sensor society’, in which the always-on interactive device is doubling as a tool for constant, passive data collection. Every connected device is capable of being a ‘sensor’, and monitoring its users. This turns privacy principles such as collection limitation, and limits on secondary use of data, on their head: “the function is the creep”.

But does it have to be this way? Are we going to see a push-back, as businesses realise the benefit of instead choosing privacy?

It’s one thing for businesses to spout platitudes like “we take our customers’ privacy seriously”, and another thing entirely to embrace Privacy by Design. But it can be done. We’ve helped a number of our PIA clients translate privacy law into design rules for business processes and solution architecture.

Now it looks like some leading businesses are doing the same. Car manufacturer Audi has recently rejected Google’s industry-partnership push to develop internet-assisted driving, with their CEO noting Germans’ reservations about data collection and describing the car as like “a second living room – and that’s private.”

And Apple is keen to distinguish itself as the pro-privacy player in the consumer tech space, designing its latest operating system to keep personalised data only on the device, instead of being shared.

I’ll put it in words that surely Barbie would understand. Maybe, just maybe, privacy is the new black.


Let’s take a ride on the privacy law reform merry-go-round


So, I have been approached by a NSW Parliamentary committee to make a submission on whether or not we need a statutory cause of action for serious invasions of privacy.

My first thought was: why bother? We’ve been on this merry-go-round before. The ink is barely dry on the comprehensive, considered and balanced review conducted on this very topic by the Australian Law Reform Commission. The NSW Law Reform Commission also had a swing at this topic a few years back. Nothing has changed. No new laws, no new remedies.

Why should I waste my breath to answer the same question, to generate the same recommendations, the nuances of which will then be misrepresented by the media and dismissed or ignored by successive governments?

But my second thought was: I’d better at least read the terms of reference first. And lo and behold, the terms of reference also include inquiring into and reporting on the adequacy of existing remedies for serious invasions of privacy.

Well, here’s something that the Legislative Council’s Standing Committee on Law and Justice might just be able to sink their teeth into, and maybe – just maybe – could persuade the Attorney-General to immediately act upon: fixing the problems with PPIPA, the key privacy statute in NSW.

So my answer is yes. YES. Yes, we need better remedies for invasions of privacy. Because the law is failing us now.

Here’s a few examples of why.

Emerging privacy issues

The latest moral panic in privacy world is over the privacy-invasive nature of drones. Or maybe this week it is Big Data, or geolocation data, or maybe the Internet of Things. It’s hard to choose.

People like to say that the law doesn’t keep up with technology. That’s only half true.

Australian privacy law is designed to be technology-neutral, so that our laws don’t become obsolete a millisecond after they are written. (Unlike in the USA, where they have specific laws about things like the privacy of your VHS video rental records …)

Our flexible, principles-based privacy laws actually have plenty to say about what data can and can’t be collected, what can or can’t be disclosed, the need to ensure the accuracy, integrity and security of data, and everything else in between. These principles can be applied to drones or Big Data, just as they can be applied to paper files. In other words, the conduct could be regulated easily enough.

But the problem lies in the gaps in who our laws regulate: the conduct might be covered, but the person or body engaging in it often is not.  There is also a failure of enforcement. This is why people think – incorrectly – that the law is outdated. It’s not outdated. It’s just not applied widely or deeply enough.

The black holes where the law doesn’t apply

These are pretty well documented, so here’s just a quick re-cap of all the privacy invaders who are not regulated by either NSW or federal privacy law.

Individuals not operating a business. So that revenge porn posted online? Not regulated.

Businesses with an annual turnover of less than $3M (except for health service providers). So the videographer flying drones over residential properties and filming people in their backyards? Not regulated.

Media organisations. In the business of outing Ashley Madison users on the air for entertainment value? Publishing photos of celebrities and royals in their private moments? Using a helicopter to film a family on their private property, grieving over a dead child? Not regulated.

Political parties. Hoovering up data from petitions, letters to newspapers and approaches to constituents’ local MPs, mining it to make assumptions about political opinions, and then crafting messages skewed to individual voters? Not regulated.

State-owned corporations in NSW. Public utilities which hold property, consumption, billing and payment data about land owners and residents. Not regulated.

The failure of enforcement

NSW has only a part-time Privacy Commissioner, who does not have enough staff or an independent budget, let alone any powers to levy fines or compel privacy-invaders to do anything.

Although in NSW we are blessed with a Tribunal which offers some (relatively) cheap access to justice for unrepresented complainants, the maximum compensation that can be ordered to be paid by a privacy invader to their victim is $40,000. The Tribunal has noted this is too low in serious cases of malicious breaches causing severe financial and psychological harm.

The ridiculous loopholes

And then, for the remaining public sector agencies that are actually regulated by PPIPA, there remain some unjustifiable loopholes, unique to NSW. Loopholes that are so wide you could drive a truck full of privacy-invaders through them, and still have room for a parade of dancing elephants on either side.

The Bad Cop Exemption

First up, s.27 of PPIPA.

I am a firm believer that the public interest in protecting privacy must be balanced with the public interest in effective law enforcement. There are indeed sensible exemptions for investigations and law enforcement which seek to achieve that balance.

And then there is s.27, which adds on top an entirely unnecessary blanket exemption for all police activities, other than educative or administrative ones. The effect of s.27 has been to render many police activities unaccountable in terms of privacy protection, even where a police officer acts corruptly or unlawfully – because negligent, reckless, unlawful or corrupt conduct is not an ‘administrative or educative function’.

So unlawful police behaviour like obtaining personal information by way of an invalid subpoena? Exempt.

Malicious police behaviour like disclosing information about the sexuality of a woman to her boyfriend, which results in the woman being assaulted by her enraged partner? Exempt.

A negligent or reckless failure to check a child protection allegation which the police “know is false or should reasonably be expected to know to be false” before acting on it? Exempt.

Systemic problems like a failure to ensure the accuracy of bail records, so that hundreds of kids end up wrongly arrested or imprisoned? Exempt.

A failure to enforce data retention rules, so that decades-old ‘spent’ convictions are disclosed to a man’s partner and employer? Exempt.

Poor data security practices like a single shared login, no register of authorised users and no staff training when accessing public street CCTV footage? Exempt.

You can have blanket exemptions which allow corruption and negligence to thrive, or you can have nuanced, sensible, balanced exemptions to enable legitimate law enforcement, but allow remedies for victims of illegitimate police conduct. Please, Parliamentary Committee – recommend abolishing s.27.

The Not In NSW Exemption

Then there is the why-is-this-still-not-fixed s.19(2) problem.

Back in 2008, the Tribunal found that s.19(2) “covers the field” for transborder disclosures (i.e. disclosing personal information to a person or body outside NSW), and therefore s.18 (the regular Disclosure principle) does not apply. Except that s.19(2) has never actually commenced. The outcome of that 2008 GQ case was that in the Tribunal’s view, there are no restrictions on disclosures outside NSW.

The effect is that a public sector agency in NSW which wants to disclose something it shouldn’t, and which would breach the general prohibition against disclosure at s.18, can circumvent the law by simply sending the information to someone outside NSW.

Just let that sink in for a bit – a public sector agency can disclose anything it likes, without being in breach of PPIPA, so long as it first sends it to someone outside NSW. A journalist in Canberra, for example.

So, a public sector agency could disclose the Premier’s mental health records; or the Attorney General’s criminal records; or records about the Police Minister’s non-payment of his council rates – assuming any such records existed – without breaching PPIPA, so long as it was sent outside NSW.

This is an outcome Parliament surely did not intend.

In GQ the Tribunal stated that the situation could be remedied by the Privacy Commissioner making a Privacy Code of Practice, but this is not true; the Privacy Commissioner can only ‘prepare’ a Code under s.19(4). It can only be ‘made’ into law by the Attorney General. Whether by way of a Code, or an amendment to the Act, political will is needed to fix this problem.

After the GQ decision in 2008, commentators including yours truly ranted and raved about this outrageous and ridiculous outcome. But seven years later, nothing. No Code, no amendment to fix the law.

In the meantime, another case has come and gone, with the same outcome: a disclosure to a woman’s employer that would have been found in breach of PPIPA if the employer had been in NSW, but because the disclosure was made to someone in the Northern Territory, it is magically exempt.

The Not Our Fault Exemption

There is also the Personal Frolic Exemption at s.21.

This one has conveniently allowed public sector agencies to avoid having to provide any redress to victims of privacy breaches caused by the conduct of their employees, by arguing that the employee wasn’t really acting as an employee when they did that bad thing, so the agency cannot possibly be held liable. Which sounds fine in theory, but leaves the victim with zero redress. The corrupt use and disclosure provisions in Part 8 of PPIPA offer no remedy to the victim of privacy harm.

So the act of looking up a person’s criminal record without authority and using it to blackmail him? Exempt.

A school teacher looking up student medical records and disclosing them to a local soccer club? Exempt.

The unauthorised disclosure of the contents of a complaint letter by an employee of a local council to the person who was the subject of the complaint? Exempt.

The disclosure of a student’s university grades by an employee of the university to her ex-husband? Exempt.

Our submission

Are existing remedies adequate, in relation to serious invasions of privacy? No.

Should a statutory cause of action for serious invasion of privacy be introduced? Yes.

But first, please – start with fixing PPIPA.  Let’s get off this merry-go-round, and actually fix the law.

 

This submission is drawn from our experience consulting to NSW public sector agencies on privacy matters since 2004, as well as from PPIPA in Practice, our annotated guide to the Privacy and Personal Information Protection Act 1998 (PPIPA), which incorporates consideration of the more than 320 cases decided to date under PPIPA and the Health Records & Information Privacy Act 2002. For more information see www.salingerprivacy.com.au.

There’s more than one way to bake a pia


Although it is great to see Privacy Impact Assessment (PIA) being discussed in mainstream media, the recent Lateline program on ABC TV was also depressing in its conclusion: that PIAs are not being done routinely (and if done, are mostly not being done ‘properly’), even when the privacy issues are most acute – as is typically the case with major national security initiatives.

But how do you know when to do a PIA? And how are you supposed to know if you are doing it ‘properly’?

The analysis underpinning Lateline’s story was this report from privacy advocate Roger Clarke. He developed a five-factor test, to judge 72 national security initiatives, legislative or otherwise, introduced since 2001.

Clarke reviewed:

  • whether there was evidence of a PIA being performed
  • whether advocacy organisations were aware of the PIA
  • whether advocacy organisations were engaged in the PIA
  • whether the PIA Report was published, and
  • whether advocacy organisations’ views were appropriately reflected in the PIA Report.

He concluded that only three of the 72 initiatives passed this test.

There is a conflict of interest here – not only is Roger Clarke the immediate past Chair of one of the advocacy organisations he expects to be consulted, the Australian Privacy Foundation (APF), but he also runs a privacy consultancy business, offering PIA services – as do we. So he is sitting in judgment on not only himself, but also his professional competitors. (Luckily for us, Salinger Privacy got Clarke’s stamp of approval for two of the three PIAs he deemed to be sufficient; his own was the third. And my own declaration: I was also an active member of the APF, including two years as Chair, from 2004 to 2007.)

I am a big fan of stakeholder consultation when conducting PIAs. It’s common sense project management. Why wouldn’t you want a ‘heads up’ on what your biggest critics might think or say? And if your initiative is a major national security project or piece of legislation affecting large numbers of citizens or visitors, then absolutely, meet with the APF, EFA, CCL or Liberty Victoria, and others. You might be surprised at how they can assist.

But is engagement with privacy or civil liberties advocates a pre-condition of what makes a ‘proper’ PIA? No. Sometimes the stakeholders to consult with will be purely internal; or they might be individuals or organisations representing your customers.

I think the question of whether or not a PIA has been done ‘properly’ is too subjective to be tested at all. I like to say that PIAs are more art than science. They don’t sit easily with black letter lawyers.

Actually, PIAs are more like cooking than either art or science. A privacy impact assessment has to take the business objectives of the project, whisk it thoroughly with some law that is already ‘fuzzy’, and then stir in a measure of stakeholder input, a good dollop of community expectations, and a pinch of unpredictability. And don’t forget to set the oven dial to ‘Privacy by Design’.

There are cookbooks like the OAIC PIA guide to help you along your way. There are handy lists of the ingredients that might trigger a PIA, or the questions that you might ask.

But the ultimate tests are: Have you identified all the privacy risks that might arise? And then, have you found ways to mitigate those risks?

The proof of that pudding will only ever be in the eating.

Don’t throw out the baby with the bath water on donor privacy


There is a debate going on in Victoria about when it is acceptable to override the wishes of someone who has explicitly refused their consent for their identity and information to be shared. Or in other words – when it is OK to break a privacy promise.

Promises of perpetual anonymity that were made to sperm and egg donors in the 1980s and 1990s are now under threat, as the Victorian government is proposing to put the rights of donor-conceived children to know about their biological parents ahead of the privacy rights of the donors.

Before 1998, donors were promised anonymity unless they consented otherwise. The proposal now is to make their identity and contact details available, even if they have not consented. Concerns have been raised that the actual donors to be affected by the proposal have not been contacted to be warned about the proposed legal change, let alone been given an opportunity to argue against the proposal.

This debate tends to characterise the issue as pitting the rights of a few thousand donor-conceived people against the rights of a thousand or so donors. They each have legitimate and compelling stories to tell about why they deserve to know, or why they deserve to be left alone. But that’s not the whole story, because privacy is both a private concern and a public interest. We all have a stake in this argument.

Upholding the privacy of donors is not only a private concern for those donors. There is a public benefit gained when both individuals and the community as a whole feel they can trust the privacy promises that are made to them.

Yet disturbingly, the discussion paper does not mention privacy once – although it does talk about anonymity. Neither the Victorian Commissioner for Privacy and Data Protection, nor the Health Services Commissioner who administers health privacy legislation in Victoria, appear to have made any public statement on the proposal.

If this proposal goes ahead, how might it impact on the degree to which people trust any promises about confidentiality or anonymity made by the government, or across the health sector? One of the doctors responsible for thousands of IVF procedures has described the proposal as making liars out of all the doctors who promised anonymity to their patients.

There is the risk that important public health initiatives could be undermined, if people start to have second thoughts about accessing other ‘anonymous’ health services such as sexual health clinics, alcohol counselling or safe injecting rooms; or if people stop trusting in promises made about the limits placed on secondary use or disclosure of their health information. Will this impact on population-wide programs like vaccinations or cancer screening? Will this impact on the faith people put in shared e-health records?

It’s a similar problem to the one facing the custodians of blood samples, collected from almost every baby born in Australia since the 1970s, and stored on ‘Guthrie cards’. For the first few decades during which the blood tests were administered, it was inconceivable that a dried spot of blood on a small piece of cardboard could be tested decades later for DNA. In 1997 the Western Australia police, investigating a case of suspected incest, came asking for the Guthrie cards of members of the suspect’s family – without the consent of children or the birth mothers. The Perth hospital handed them over, and the offender was convicted.

But the hospital was worried about the potential loss of trust in their newborn screening program; they worried that the realisation that the police had access to what amounted to a population-wide DNA database would cause parents to stop screening their newborn babies for diseases like cystic fibrosis that require early intervention. So the hospital took the radical step of destroying decades’ worth of Guthrie cards. They now only keep the blood samples for two years after birth.

In other words, maintaining public trust in patient privacy was seen as so critical to the success of a public health program, that they gave up all the potential law enforcement and research benefits of keeping the records, rather than risk a loss of public trust in the program itself.

Preserving their privacy is not only in the interests of sperm and egg donors, who are only asking to keep what they were promised decades ago. Maintaining their trust works for all of us, because patient trust is the glue that holds together so many public health initiatives – the benefits from which we all get to share.

Creepiness is in the eye of the beholder


Happy Halloween dear readers! As you carve your pumpkins, decorate your house with plastic spiders and work on your scary costumes, it seems an apposite time to reflect on … creepiness.

Privacy practitioners are often called upon to determine whether or how a particular initiative can comply with privacy law. But often a more compelling and relevant question is: will this project fly, or will it crash and burn?

The answer to both questions is often “it depends”. Privacy principles themselves are fuzzy law, meaning they offer plenty of blank space around “reasonable expectations” and “take reasonable steps” that the practitioner has to fill in. And then even if you do comply with the law, a backlash from your customers or the wider public can bring your project undone faster than you can say “Australia Card”.

So how are we supposed to figure out in advance what will be considered unduly privacy invasive, and what won’t?

A couple of my favourite privacy academics, Omer Tene and Jules Polonetsky, have proposed a Theory of Creepy to help us figure it out. They suggest that creepiness arises when new technology rubs up against social norms. Sometimes social norms shift to accommodate the new technology – but sometimes they don’t, and a consumer backlash ensues.

Tene & Polonetsky have provided examples of the kinds of activities that might be seen as cool – or creepy – depending on the context:

  • Ambient social apps: These take publicly available location data about people and present it to other users, which might end up being regarded as cool (Foursquare) or creepy (Girls Around Me which showed the social media profiles of women physically near the user at any given time; or the Obama app used in the 2012 US election campaign, which plotted voters’ names, age, gender and political leanings on maps of residential areas).
  • Social listening: This involves monitoring customers via social media, and anticipating their needs or responding to their concerns. This kind of intensive surveillance of and approaches to individual customers can be regarded as brilliant best-practice marketing (KLM’s Surprise initiative), or a disastrous cross-over into stalking behaviour (British Airways’ Know Me initiative).
  • Data-driven direct marketing: Using a customer’s history of past purchases to offer suggestions as to what they might like to purchase now, which can be seen as expected business practice (Amazon’s book recommendations), or as so surprising that it becomes the case study in what not to do (Target’s marketing to a teenager it figured out was pregnant before her father did).
  • New products: The failure of Google Glass wearers to anticipate and adhere to social norms led to the creation of a new epithet – Glassholes – and sent the product back to the drawing board.

Their conclusion is that any new project requires a social or ethical value judgment to be made – and that this judgment should not be left to the engineers, marketers or lawyers. As Tene & Polonetsky say: “Companies will not avoid privacy backlash simply by following the law. Privacy law is merely a means to an end. Social values are far more nuanced and fickle”.

In my view, that fickleness is the core of the problem. Humans are not rational or consistent in their responses. Why was British Airways’ social listening deemed creepy, but KLM’s deemed cool? Both involved unexpected online identification and analysis of customers waiting for flights, and then real-world interactions with those customers.  Why do we accept CCTV manned by unseen agents, but not a guy with a camcorder? I don’t have the answer.

As our Associate, Stephen Wilson has written: “The most obvious problem with the Creepy Test is its subjectivity. One person’s ‘creepy’ can be another person’s ‘COOL!!’”.

So, how can we avoid creepiness, if we can’t predict where a new initiative will fall on the cool-to-creepy continuum?

Like Stephen, and unlike Tene & Polonetsky, I still find privacy law to be our best starting point. Yes it is fuzzy law, yes it has ridiculous loopholes. But at their core, privacy principles represent both common sense and good manners: Only collect what you need. Ask the subject for the information directly. Tell them what you’re going to do with it. Don’t go using data in new and surprising ways. Don’t expose the data unnecessarily. Etcetera.

As I have written before, there are pragmatic ways to develop customer trust and protect privacy but maintain your business objectives, and in many ways, the privacy principles themselves point the way.

The value of proposing a ‘creepiness’ test as part of project management is that it might be a useful way to start a conversation with your marketing department, your engineers or even your CEO, if talking about the law tends to send them to sleep. Of course anticipating and reflecting community expectations is also critical to fleshing out your analysis of potential privacy pitfalls. But ultimately, our principles-based privacy law is the best place to start.

Other than on Halloween, better to be cool than creepy.

A bridge too far: 85% of the world ignored at ‘international’ conference


Ah, Amsterdam. You can ride a bike, you can travel the canals by boat, you can walk around happily by yourself (ideally scoffing from a paper cone of hot frites doused in mayonnaise) or you can catch a tram with the locals, but you cannot escape one thing – the bridges. So many, many bridges. 1,539 of them, apparently.

Just like this year’s Eurovision song contest (but sadly without the glitter, outrageous outfits or Guy Sebastian), the theme for this year’s Data Protection Commissioners’ conference in Amsterdam was “Building Bridges”. I discovered this ‘bridges’ metaphor is extremely malleable. Every speaker somehow managed to make their topic related to bridges. It would have made a fun drinking game to down a shot every time you heard a speaker use the B word, but the fun might have worn off by the 57th vodka in the first panel session alone.

People then started talking about “functional bridges”. (Really? Would anyone build a non-functioning kind? My Year 6 teacher would have been all over that tautological nonsense.) By the end of the first day I felt an itching desire for some metaphorical dynamite to blow up all the metaphorical bridges.

But leaving aside the tortured use of language, more frustrating was the exclusionary, EU/US-centric discussion.

These conferences have often tended to focus on analysing – to death, sometimes with heated debate from people holding entrenched positions – the differences between and relative merits of the EU and the US approaches to privacy regulation. The EU approach is similar to Australia’s approach: generalist privacy principles built into omnibus (aka cross-sectoral) privacy laws with specialist privacy regulators. The US approach is to have only sector-specific privacy laws (e.g. a video rental privacy law, financial privacy regulations, a health insurance privacy Act …), but also a powerful consumer protection regulator with a broad remit, in the form of the US Federal Trade Commission.

Occasionally at these conferences someone like me from TROTW (the rest of the world) will raise their hand and point out (a) that TROTW indeed exists, and (b) we tend to not argue about whose system is better, we just want to get on with it, and can we please now talk about concrete examples instead of theories?

2015 was supposed to be different. The ‘building bridges’ theme was intended to put aside all debates about the relative merits of legal approaches, let bygones be bygones, and instead focus on practical solutions to privacy problems.

“Hooray!” I thought. I was hoping for some enlightening discussion – nay, even actual examples! – of how to turn abstract legal principles into concrete operational decisions and systems design. But no such luck.

We were back to discussion of EU/US data transfers, to the exclusion of almost everything else. Each topic was seen through the lens of the recent unravelling of the Safe Harbor scheme, leading to much hand-wringing about what American companies and EU regulators are supposed to do now about their transborder disclosures. Back to square one: “my legal system is better than yours, nah nah nah nah nah”, and the more positive, but naïve, “hey, maybe we should instead design a universal set of privacy principles to cover both of us!” It was left to representatives from TROTW, like Colombia and Canada, to point out that the creation of a ‘universal’ set of privacy rules, as proposed in the 2009 ‘Madrid Resolution’, is not the task of the EU and US alone.

While the Privacy Bridges project itself was understandably written to meet its terms of reference, which were confined to non-legislative ways to “increase the transatlantic level of protection of personal data” – i.e. manage privacy issues arising from transborder disclosures between companies in the EU and the US – making the Privacy Bridges report the primary focus of this “international” conference was a disappointing waste of the time and expertise of the individuals assembled there.

The narrow focus on multinational companies sidelined voices from civil society, as well as our client base: small-to-medium businesses, NGOs and public sector agencies facing privacy challenges which have nothing or little to do with multinational transborder data flows. The topic of discussion offered nothing new or pragmatic to take away on privacy hot topics like Big Data, drones, the Internet of Things, re-identification or geolocation data.

And the arrogance was taken to squirm-inducing new heights when those of us from TROTW were repeatedly asked to identify ourselves in the audience; we were then told that the ten ‘bridges’ described in the report “may also be useful for those of you in the rest of the world”. Oh please. That is as patronising and offensive as a former Australian Prime Minister coming to Europe and telling you how to deal with the Syrian refugee crisis.

It was almost embarrassing to see how an “international” conference managed to sideline both the privacy concerns of, and the privacy expertise from, the 164 countries that are neither the US nor members of the EU; that’s 85% of the 193 members of the UN. There are billions – yes, billions – of people who live nowhere near the Atlantic Ocean. China and India, anyone?

In my view, the ‘ten bridges’ report presented only a nice list of things to talk about, for a closed circle talkfest of European regulators and American businesses, while Rome burns.

I hope that next year’s conference organisers, including the hosting data protection authority of Morocco, are willing to take a more inclusive view of our world, and thus allow an array of truly international voices to be heard, and wider expertise to be shared amongst the privacy community.

Smile! You’re on someone’s facial recognition database


Hooray, December! A time for work Christmas parties, end-of-year school concerts, days at the cricket and holidays at the beach. So many Instagram-worthy moments. But wait just a tinsel-hanging second – have you got consent to take or post that photo?

From the first Kodak Brownies to this year’s must-have Christmas present – drones – photography has never been far from consideration in privacy debates. Indeed the modern notion of a ‘right to privacy’ was prompted by a couple of lawyerly types who wrote in 1890 about a new technology which “invaded the sacred precincts of private and domestic life”: “instantaneous photographs”.

Warren and Brandeis articulated the need for a legal remedy “for the unauthorized circulation of portraits of private persons”. How disappointed they would be now, to learn that we still haven’t found a way to remedy, let alone prevent, revenge porn.

(I imagine this might not be the only thing to disappoint a couple of jurists concerned with privacy and dignity, were they magically transported from 1890 to 2015. Kim Kardashian and belfies also spring to mind.)

It is perhaps not surprising that the photography and videography practices of news media outlets are often pushing the boundaries of what is considered acceptable. Particularly in the UK, the leading privacy cases tend to arise from a celebrity pushing back against the intrusive nature of the press. From model Naomi Campbell being photographed outside a Narcotics Anonymous clinic, to singer Paul Weller’s children being photographed on a public street, the debate has raged about reasonable expectations of privacy in a public place, and privacy versus free speech.

However it is not only media outlets getting in trouble for inappropriate photography.

From the rich vein of case law in my home State alone, we have seen government agencies brought before the Tribunal to answer privacy complaints about council rangers taking photographs of building works, photographs of home interiors being used in marketing, staff security pass photographs being used for secondary purposes, photographs of children taken by a maritime safety inspector, and of course the streamed supply of CCTV footage to a shared terminal at the local police station.

But my personal favourite remains the what-was-he-thinking ‘PJ snapper’ case, in which a public servant, staying in shared accommodation with community representatives attending an interstate meeting, thought it would be a good idea to take a photograph of a female colleague in her pyjamas as she was waiting to use the shared bathroom. But it didn’t end there. He then distributed the photographs on CDs to other staff. (Oh, the quaintness of distributing photos on a CD. So last decade.)

A more recent deployment of technology to invade the privacy of people potentially in their PJs involved a news outlet using a drone to film a cricket match in New Zealand, with the drone flying within 10 metres of a nearby apartment building. Sky TV argued that they had the consent of two women on a balcony to film them, as judged by their ‘hand gestures’. However another resident gave the drone ‘the fingers’, and subsequently made a privacy complaint.

I know that we privacy professionals like to debate what is express versus implicit consent, and opt-in versus opt-out, and written versus verbal, but surely we can all agree that giving a camera ‘the fingers’ – or indeed only one finger – is an effective, if inelegant, way of indicating one’s refusal of consent to be photographed.

Turning away from the camera is another – assuming you know it is there. Though apparently Facebook is working on improving its facial recognition software so that it can recognise you even if you have turned away.

Which brings us to the ‘Capability’, which sadly is not the title of the new Jason Bourne movie, but is in fact the short name for the new national facial recognition and photo-matching database. With a federal government budget of $18.5M, the system is intended to draw together photographs taken by federal, State and Territory governments for existing ‘evidence of identity’ documents, such as driver licences and passports.

This is bad enough – I am already worried that DFAT might still have the passport photo taken of me in 1985, the day after I had spectacularly failed in my parent-defying attempt to bleach my hair to look like a surfie chick, and instead had turned myself Ronald-McDonald-fluoro-orange. My mother’s very inventive punishment was to have my passport photo taken before letting the hairdresser fix my hair. I don’t know what was worse about that photo – the orange hair, or the tear-streaked face of teenage misery? Ten years I had that passport. TEN YEARS. (It even prompted a new Dad joke, usually told to much eye-rolling as our bedraggled family stood in front of an immigration official: “What’s the definition of jetlag? When Anna actually looks like her passport photo.”)

But bad passport photos aside, the ‘Capability’ might go well beyond the photos you at least knew you were having taken for government EOI purposes. Officials recently admitted that users of the system, such as police agencies, could also add in photographs drawn from other sources, such as social media sites.

Just how well the internet giants are managing the accuracy of their own facial recognition programs has been brought into question, but nonetheless, the prospect of photos taken at parties or on holidays, and tagged with your name, ending up in law enforcement hands is a disturbing example of the unexpected re-use of your personal information.

So this Christmas, think before you snap or post.  And please – no belfies.

 

Photograph (c) Shutterstock

Find your friends … and then invade their privacy!


The highest court in Germany has ruled that Facebook’s “Find Friends” function is unlawful there. The decision is the culmination of legal action started in 2010 by German consumer groups, and confirms the rulings of other lower courts in 2012 and 2014. The gist of the privacy breach is that Facebook is illegitimately using details of third parties obtained from members, to market to those third parties without their consent. Further, the “Find Friends” feature was found to not be clearly explained to members when they are invited to use it.

My Australian privacy colleague Anna Johnston and I published a paper in 2011 examining these very issues; see Privacy Compliance Problems for Facebook, IEEE Technology and Society Magazine, V31.2, December 1, 2011, at the Social Science Research Network, SSRN.

Here’s a recap of our analysis.

One of the most significant collections of Personally Identifiable Information (PII) by online social networks is the email address books of members who elect to enable “Find Friends” and similar functions. This is typically the very first thing that a new user is invited to do when they register for an OSN. And why wouldn’t it be? Finding friends is core to social networking.

New Facebook members are advised, immediately after they first register, that “Searching your email account is the fastest way to find your friends”. There is a link to some minimal explanatory information:

    • Import contacts from your account and store them on Facebook’s servers where they may be used to help others search for or connect with people or to generate suggestions for you or others. Contact info from your contact list and message folders may be imported. Professional contacts may be imported but you should send invites to personal contacts only. Please send invites only to friends who will be glad to get them.

This is pretty subtle. New users may not fully comprehend what is happening when they elect to “Find Friends”.

A key point under international privacy regulations is that this importing of contacts represents an indirect collection of PII of others (people who happen to be in a member’s email address book), without their knowledge, let alone authorisation.

By the way, it’s interesting that Facebook mentions “professional contacts” because there is a particular vulnerability for professionals which I reported in The Journal of Medical Ethics in 2010. If a professional, especially one in sole practice, happens to have used her web mail to communicate with clients, then those clients’ details may be inadvertently uploaded by “Find Friends”, along with crucial metadata like the association with the professional concerned. Subsequently, the network may try to introduce strangers to each other on the basis they are mutual “friends” of that certain professional. In the event she happens to be a mental health counsellor, a divorce attorney or a private detective for instance, the consequences could be grave.
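
To make the mechanism concrete, here is a minimal sketch of how a ‘find friends’ style suggestion engine, driven purely by uploaded address books, could expose the fact that two strangers share the same professional contact. The data, names and matching logic below are invented for illustration; this is not Facebook’s actual algorithm.

    # Illustrative only: invented data and a toy matching heuristic, not
    # Facebook's actual algorithm. The point is that one member's uploaded
    # address book can expose a relationship between two non-members.

    uploaded_address_books = {
        # member who clicked "Find Friends" -> contacts harvested from web mail
        "counsellor@example.com": {"alice@example.com", "bob@example.com"},
    }

    def suggest_contacts_for(person):
        """Suggest people who co-appear with `person` in any uploaded address book."""
        suggestions = {}
        for uploader, contacts in uploaded_address_books.items():
            if person in contacts:
                for other in contacts - {person}:
                    # record who the shared contact is: this is the leak
                    suggestions.setdefault(other, set()).add(uploader)
        return suggestions

    # Alice and Bob have never met, and neither uploaded anything, yet each
    # can be introduced to the other on the basis of their shared contact,
    # revealing a confidential client relationship with the counsellor.
    print(suggest_contacts_for("alice@example.com"))
    # {'bob@example.com': {'counsellor@example.com'}}

The crucial point is that Alice and Bob never handed over anything themselves; the association between them was inferred entirely from PII collected indirectly out of a third party’s address book.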

It’s not known how Facebook and other OSNs will respond to the German decision. As Anna Johnston and I wrote in 2011, the quiet collection of people’s details in address books conflicts with basic privacy principles in a great many jurisdictions, not just Germany. The problem has been known for years, so various solutions might be ready to roll out quite quickly. The fix might be as simple in principle as giving proper notice to the people whose details have been uploaded, before their PII is used by the network. It seems to me that telling people what’s going on like this would, fittingly, be the “social” thing to do.

But the problem from the operators’ commercial points of view is that notices and the like introduce friction, and that’s the enemy of infomopolies. So once again, a major privacy ruling from Europe may see a re-calibration of digital business practices, and some limits placed on the hitherto unrestrained information rush.

 

Photograph (c) Shutterstock


How Stephanie’s broken down car is undermining your privacy


We need to talk about Ben.

Specifically, about Ben Grubb, the tech journo who triggered an on-going legal case, the resolution of which might yet either reinforce or undermine Australia’s privacy laws. (We’ll get onto Stephanie and her troublesome car shortly.)

Actually, we really need to talk about the word ‘about’ – what it means for information to be ‘about’ Ben. Because it is that one little word – about – which has caused such a ruckus.

When is information ‘about’ Ben, and when is it ‘about’ a device or a network?

First, the background. When the Australian Government was preparing in 2013 to introduce mandatory data retention laws, to require telcos to keep ‘metadata’ on their customers for two years in case law enforcement types needed it later, Ben Grubb was curious as to what metadata, such as the geolocation data collected from mobile phones, would actually show. He wanted to replicate the efforts of a German politician, to illustrate the power of geolocation data to reveal insights into not only our movements, but our behaviour, intimate relationships, health concerns or political interests.

While much fun was had replaying the video of the Attorney General’s laughable attempt to explain what metadata actually is, Ben also worked on a seemingly simple premise: “the government can access my Telstra metadata, so why can’t I?”

Exercising his rights under what was then NPP 6.1, Ben sought access from his mobile phone service provider, Telstra, for his personal information – namely, “all the metadata information Telstra has stored about my mobile phone service (04…)”.

At the time of his request, the definition of ‘personal information’ was “information or an opinion (including information or an opinion forming part of a database), whether true or not, and whether recorded in a material form or not, about an individual whose identity is apparent, or can reasonably be ascertained, from the information or opinion”.

(Since then, the definition of ‘personal information’ has changed slightly, NPP 6.1 has been replaced by APP 12, and the metadata laws have been passed, including a provision that metadata is to be considered ‘personal information’ under the Privacy Act. Nonetheless, this case has ramifications even under the updated laws.)

Telstra refused access to various sets of information, including location data on the basis that it was not ‘personal information’ subject to NPP 6.1. Ben lodged a complaint with the Australian Privacy Commissioner. While the complaint was ongoing, Telstra handed over a folder of billing information, outgoing call records, and the cell tower location information for Ben’s mobile phone at the time when Ben had originated a call, which is data kept in its billing systems.

What was not provided, and what Telstra continued to argue was not ‘personal information’ and thus need not be provided, included ‘network data’. Telstra argued that the geolocation data – the longitude and latitude of mobile phone towers connected to the customer’s phone at any given time, whether the customer is making a call or not – was not ‘personal information’ about a customer, because on its face the data was anonymous.

The Privacy Commissioner ruled against Telstra on that point in May 2015, finding that a customer’s identity could be linked back to the geolocation data by a process of cross-matching different datasets. Privacy Commissioner Timothy Pilgrim made a determination which found that data which “may” link data to an individual, even if it requires some “cross matching … with other data” in order to do so, is “information … about an individual”, whose identity is ascertainable, meaning “able to be found out by trial, examination or experiment”. The Privacy Commissioner ordered that Telstra hand over the remaining cell tower location information.
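
To see what that ‘cross matching’ can mean in practice, here is a minimal sketch: an ‘anonymous’ network log keyed only by a device identifier becomes information about a person the moment it is joined to a billing table that maps devices to account holders. The field names and records below are invented for illustration, and are not drawn from Telstra’s actual systems.

    # Illustrative only: invented field names and records, not Telstra's
    # actual systems. The point is that joining two internal datasets turns
    # "anonymous" network records into records about a named customer.

    network_log = [
        # (device_id, timestamp, cell tower latitude/longitude)
        ("DEV-123", "2013-08-01T08:05:00", (-33.8688, 151.2093)),
        ("DEV-123", "2013-08-01T18:40:00", (-33.8915, 151.2767)),
    ]

    billing_accounts = {
        # device_id -> account holder, held in a separate billing system
        "DEV-123": "B. Grubb",
    }

    # Cross-matching: resolve each "anonymous" network record to a person.
    for device_id, timestamp, (lat, lon) in network_log:
        customer = billing_accounts.get(device_id)
        if customer is not None:
            print(f"{customer} was near the tower at ({lat}, {lon}) at {timestamp}")

Nothing in the network log names anyone, yet a trivial lookup against another dataset held by the same organisation is enough to make every row information about an identifiable person.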

Telstra appealed the Privacy Commissioner’s determination, and in December 2015 the Administrative Appeals Tribunal (AAT) found in Telstra’s favour. Now here is where it gets interesting.

We knew that the case would turn on how the definition of ‘personal information’ should be interpreted, and I for one expected that the argument would centre on whether or not Ben was ‘identifiable’ from the network data, including how much cross-matching with other systems or data could be expected to be encompassed within the term ‘can reasonably be ascertained’.

And at first, that looked like how the case was going. The AAT judgment goes into great detail about precisely what data fields are in each of Telstra’s different systems, and what effort is required to link or match them up, and how many people within Telstra have the technical expertise to even do that, and how difficult it might be. But then – nothing. Despite both parties making their arguments on the topic of identifiability, the AAT drew no solid conclusion about whether or not Ben was actually identifiable from the network data in question.

Instead, the AAT veered off-course, into questioning whether the information was even ‘about’ Ben at all. Using the analogy of her own history of car repairs, Deputy President Stephanie Forgie stated:

“A link could be made between the service records and the record kept at reception or other records showing my name and the time at which I had taken the care (sic) in for service. The fact that the information can be traced back to me from the service records or the order form does not, however, change the nature of the information. It is information about the car … or the repairs but not about me”.

The AAT therefore concluded that mobile network data was about connections between mobile devices, rather than “about an individual”, notwithstanding that a known individual triggered the call or data session which caused the connection. Ms Forgie stated:

“Once his call or message was transmitted from the first cell that received it from his mobile device, the data that was generated was directed to delivering the call or message to its intended recipient. That data is no longer about Mr Grubb or the fact that he made a call or sent a message or about the number or address to which he sent it. It is not about the content of the call or the message. The data is all about the way in which Telstra delivers the call or the message. That is not about Mr Grubb. It could be said that the mobile network data relates to the way in which Telstra delivers the service or product for which Mr Grubb pays. That does not make the data information about Mr Grubb. It is information about the service it provides to Mr Grubb but not about him”.

Well. That was a curve ball I did not see coming.

This interpretation seems to conflate object with subject, by suggesting that the primary purpose for which a record was generated is the sole point of reference when determining what that record is ‘about’. In other words, the AAT judgment appears to say that what the information is for also dictates what the information is about.

In my view, this interpretation of ‘about’ is ridiculous. Why can’t information be generated for one reason, but include information ‘about’ something or someone else as well? Why can’t information be ‘about’ both a person and a thing? Or even more than one person and more than one thing?

Even car repair records, which certainly have been created for the primary purpose of dealing with a car rather than a human being, will have information about the car owner. At the very least, the following information might be gleaned from a car repair record: “Jane Citizen, of 10 Smith St Smithfield, tel 0412 123 456, owns a green Holden Commodore rego number ABC 123”.

If we accept the AAT’s view that the car repair record has no information ‘about’ Jane Citizen, then Jane has no privacy rights in relation to that information, and the car repairer has no privacy responsibilities either. If Jane’s home address was disclosed by the car repairer to Jane’s violent ex-husband, she would have no redress. If the car repairer failed to secure their records against loss, and Jane’s rare and valuable car was stolen from her garage as a result, Jane would have no cause for complaint.  Jane would not even have the right to access the information held by the car repairer, to check that it is correct.

How far could you take this argument? Could banks start arguing that their records are only ‘about’ transactions, not the people sending or receiving money as part of those transactions? Could hospitals claim that medical records are ‘about’ clinical procedures, not their patients? Could retailers claim their loyalty program records are ‘about’ products purchased, not the people making those purchases?

Surely, this is not what Parliament intended in 1988 when our privacy laws were first drafted – or indeed, when they were updated in 2014, when the amendments were claimed to bring Australia’s privacy protection framework into the modern era.

In this era of Big Data, it is the digital breadcrumbs left behind in operational or transactional systems which can yield the business insights with the most value – and are thus in need of privacy protection.

The Privacy Commissioner is appealing the AAT’s decision to the Federal Court. I can only hope the Federal Court can see that information created for an operational purpose might also contain both deliberate and incidental information ‘about’ individuals – individuals who expect their privacy to be protected, no matter how or why the records were created in the first place.

The alternative is to let Stephanie’s broken-down car throw a major spanner in the works of privacy protection in Australia.

 

Photograph (c) Shutterstock

Will the new Transborder principle become an April fool’s joke?


On 1 April, a new Transborder Disclosure principle will commence in NSW. The revised section 19(2) of the Privacy and Personal Information Protection Act 1998 (NSW) (PPIPA), will – if it is interpreted the correct way – raise the bar when public sector agencies across State and local government seek to disclose information outside NSW, including to the Commonwealth government.

But here’s the kicker – if it is interpreted the correct way.

You see, we’ve had a long and messy history of getting this wrong in NSW. So horribly wrong.

I’ve written before about the ‘not in NSW’ loophole. The way that s.19(2) has been interpreted by the ADT and its successor NCAT resulted in a loophole that effectively allowed for information laundering. Just wash your dirty data somewhere outside of NSW, and then you can bring it back inside the borders, without breaching privacy law.

Say you work at a public sector agency in NSW – a government department, a local council or a university – and you want to disclose something that you know you can’t, because it is prohibited by s.18 of PPIPA, which sets the standard for disclosure.

But according to a history of cases in the Tribunal – first the ADT and then its successor NCAT – if the disclosure is going to be made to someone outside NSW, then the normal disclosure rule at s.18 doesn’t apply. You can just ignore it! Instead, the ‘transborder’ rule at s.19(2) applied. Except that, in a historical quirk, s.19(2) has never actually applied in practice.  (For the provision to start applying, it needed a trigger in the form of a Code, which was never made.)  It sat on the statute books, without ever actually coming into force, from 1998 until 1 April this year, when it will finally be replaced.

The effect of this interpretation of the interplay between sections 18 and 19(2) meant that any personal information could be disclosed to anyone, without having to pass any kind of test (like ‘with consent’, or ‘for a directly related secondary purpose’, or ‘for a law enforcement purpose’, etc) – so long as the recipient was outside NSW.

That was clearly an absurd outcome. Not only was s.19(2) failing to set a higher standard for transborder disclosures, it was in fact undermining the normal disclosure rule.

In welcome news, last year the Government finally decided to fix this ridiculous situation. The Privacy and Personal Information Protection Amendment (Exemptions Consolidation) Bill 2015 was duly drafted. (The Bill introduced a number of other changes to PPIPA, but here I’m just interested in the transborder disclosure rule.)

Hooray! Champagne all round. Oh, except, oops … I don’t think they got it quite right. I believe there is still the chance that NCAT will interpret s.19(2) the wrong way, and continue to read it as supplanting, rather than supplementing, the standard disclosure rule at s.18.

Sadly, both the Government and the Opposition missed the opportunity to fix this properly. Here’s how.

First, you need to understand the structure of PPIPA:

Section 18 sets the normal rule for disclosure. You know the drill: don’t disclose personal information unless it is for a routine purpose you notified the person about, or it is for a directly related secondary purpose, or you have the person’s consent, or in an emergency, yada yada.

Section 19 then creates “Special restrictions on disclosure of personal information”. It is split into two parts:

  • 19(1) is about what the Privacy Commissioner has termed ‘sensitive information’ – information about ethnicity, religion, etc – and sets a very high standard for disclosure, which supplants the rule at s.18; while
  • 19(2)-(5) was (and the new s.19(2) will be) about what tends to be called ‘transborder disclosures’, and in my view is intended to supplement, not supplant, the rule at s.18.

The amendment Bill makes no change to s.19(1). It abolishes the old s.19(2)-(5), and replaces it with a new s.19(2).

In the second reading speech of the Bill, the Attorney General Ms Gabrielle Upton stated that the amendment “will impose some additional requirements upon New South Wales public sector agencies when disclosing personal information outside New South Wales, as was originally intended …. This will increase the level of protection for the personal information of New South Wales citizens when it is transferred out of the State”.

Indeed, to reinforce this view, tautology was deployed in the upper house debate, when the Parliamentary Secretary the Hon David Clarke stated that the new section 19(2) should be understood as “adding additional requirements” to disclosures of information outside NSW.

So the new transborder provision at s.19(2) is intended to be read as a set of ‘extra’ steps required after you have already satisfied the ‘normal’ disclosure rules at s.18 – which is as it should be. (Think about it – there is no point having a transborder principle at all, unless you want to make it tougher for personal information to leave your own jurisdiction.)

So how it should operate, according to the debate in Parliament (as indeed was, I believe, the original intention in 1998), is that:

  • first, any disclosure of personal information must meet the test at s.18, or an exemption to that rule (for example, the disclosure must be for a directly related secondary purpose, or with consent, or whatever), AND THEN …
  • if the disclosure happens to be heading out of NSW, then it must ALSO meet the test at s.19(2), or an exemption to that rule (see the sketch below).
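
For readers who think in flowcharts, here is a minimal sketch of that intended two-step logic. The two predicate functions are placeholders standing in for the actual statutory tests, which of course involve far more than a simple yes-or-no check.

    # Illustrative only: a toy model of the intended interplay between s.18
    # and s.19(2) of PPIPA, as described in the Parliamentary debates. The
    # two predicate functions are placeholders, not the statutory tests.

    def satisfies_s18(disclosure):
        """Placeholder: does the disclosure meet s.18, or an exemption to it?"""
        return disclosure.get("meets_s18", False)

    def satisfies_s19_2(disclosure):
        """Placeholder: does the disclosure meet s.19(2), or an exemption to it?"""
        return disclosure.get("meets_s19_2", False)

    def disclosure_permitted(disclosure):
        # Step 1: every disclosure must first pass the normal rule at s.18.
        if not satisfies_s18(disclosure):
            return False
        # Step 2: only if the information is heading out of NSW does s.19(2)
        # apply, as an additional hurdle supplementing (not supplanting) s.18.
        if disclosure.get("leaves_nsw", False):
            return satisfies_s19_2(disclosure)
        return True

    # On the intended reading, a transborder disclosure which satisfies
    # s.19(2) but fails s.18 is still not permitted; on the old 'loophole'
    # reading, s.18 would simply have been ignored.
    print(disclosure_permitted({"leaves_nsw": True, "meets_s19_2": True}))  # False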

However, as I submitted to a Parliamentary Committee a few days beforehand (an argument the Greens accepted and put in debate on the Bill in the Legislative Council), that is not necessarily how NCAT will actually interpret the new s.19(2). To date, the Tribunal has twice interpreted s.19(2) as ‘covering the field’ for disclosures outside NSW, meaning that s.18 can be ignored. I see no reason why NCAT Members would suddenly change their view on that – unless an NCAT Member actually reads the Parliamentary debates in detail, and decides to change their interpretation accordingly.

One of the reasons why NCAT has interpreted things this way is because s.19(1) – the rule in relation to ‘sensitive information’ – absolutely SHOULD be read as ‘covering the field’, supplanting rather than supplementing the ‘normal’ rule at s.18.  (The other reason involves understanding Latin – generalia specialibus non derogant anyone?)

This whole problem is because section 19 was poorly drafted in 1998. It is trying to do two different things, requiring two opposite interpretations of how one section should be read in relation to the section immediately before it.

The first half (s.19(1)) is trying to say: “in relation to these special kinds of information (ethnicity, religion etc), please ignore s.18 and INSTEAD do this … ”  However the second half (s.19(2)) is trying to say: “in relation to this special kind of disclosure, please keep following s.18 BUT ALSO do this …”.

Unfortunately the Greens’ suggestion to add an extra clause, to make it bleedingly obvious to NCAT how s.19(1) versus s.19(2) should be interpreted, was not adopted. I think therefore there remains a risk that NCAT will continue to read s.19(2) as being the only rule in relation to transborder disclosures, instead of an extra rule.

I understand that the reason given by the Government for not accepting the Greens’ proposal was so as to enable consistency in drafting between the transborder principles in PPIPA and HRIPA. I don’t think that point is valid. HRIPA does not suffer from the same interpretation problems as PPIPA. (Those of us involved in drafting HRIPA in 2002 learned from the mistakes made in 1998!) HRIPA uses the language of ‘transfer’ in its transborder principle (HPP 14), not ‘disclosure’, so it is already clearer to see that HPP 14 is not supposed to supplant the ‘disclosure’ principle (HPP 11), because it is regulating a slightly different type of conduct anyway.

So … if NCAT does not change its position on how to interpret s.19, I believe that the standard for disclosures heading outside NSW will continue to be weaker than for disclosures made inside NSW.

For example, if NCAT maintains its past position that s.18 does not apply to disclosure if the personal information is heading outside NSW, all an agency needs to do in order to disclose personal information to, say, an agency in Victoria or a business in Singapore is say that the disclosure is “necessary for the performance of a contract between the individual and the public sector agency”.  Or they could say “oh, we reasonably believe that the recipient is subject to a law that upholds similar privacy principles” (but note there is nothing in that rule to require that that interstate or foreign law must be capable of actually providing an enforceable remedy to the NSW victim of a privacy breach).  Or the disclosing agency could bind the recipient by way of contract to comply with the same standards as NSW; there are multiple ways to comply with the transborder disclosure rule.

Don’t get me wrong.  I think Parliament’s intention is clear from the debates on the Bill, that s.19(1) should supplant s.18, while s.19(2) should supplement, but not supplant, s.18.  Any disclosures of personal information heading outside NSW must first meet the standard test for disclosures in s.18, and then also meet the extra test for transborder disclosures at s.19(2).  Indeed, we have written our new guide to untangling the disclosure rules on the basis of the Government’s statements in Parliament about how s.19(2) should be applied.

But the proof of the pudding will be whether NCAT also sees it that way.  So far at least one law firm seems to have interpreted the amendment as meaning that s.18 doesn’t apply to transborder disclosures, and that the amendment Bill therefore has a permissive effect, opening up the way for easier disclosures on very broad grounds – the opposite outcome to the ‘extra privacy protection’ the Government was aiming for.  (Update, 7 March: I understand the NSW Privacy Commissioner is in the process of drafting guidance material on the new s.19(2), which may help to guide interpretation.)

I remain of the belief that a further, simple amendment to s.19 could best guide interpretation of the law – something along the lines of: “Subsection (2) is in addition to the requirements of subsection (1) and section 18”.  That’s all it would take to fix this mess.

Frustratingly, the passage of the Bill was a missed opportunity to properly fix a problem that was acknowledged by all sides in Parliament. There might have been a better outcome if there had been public consultation about the Bill beforehand, or if either the Government or the Opposition had been more willing to slow down and listen during debate.

If we can’t even get minor amendments right, NSW privacy laws will remain a laughing stock.

 

Photo (c) Shutterstock

This blog was updated on 7 March 2016.

Why you might want to become a Jedi Knight for this year’s Census


In the week before Christmas last year, the Australian Bureau of Statistics quietly trashed your privacy. We have only a few months to claim it back.

In December 2015, the ABS announced its plans to collect and keep the name and address of every person in Australia, starting with the August 2016 census. And to then use your name and address, to link your census answers to other sets of data, like health and educational records, so that the ABS can develop “a richer and dynamic statistical picture of Australia through the combination of Census data with other survey and administrative data”.

That’s right – census data could be linked to health records too. So that the ABS can do things like “(understand) and support … people who require mental health services”.

This proposal represents the most significant and intrusive collection of identifiable data about you, me, and every other Australian, that has ever been attempted. It will allow the ABS to build up, over time, a rich and deep picture of every Australian’s life, in an identifiable form.

Up until now, the name and address portion of census forms was not retained by the ABS; just as soon as the rest of your census answers were transcribed, the paper forms were destroyed.

But the new proposal is to keep name and address, as well as your answers to all the Census questions included this year, such as sex, age, marital status, indigenous status, religious affiliation, income, education level, ancestry, language spoken at home, occupation, work address, previous home address, vehicles garaged at your address, and the relationships between people living in the same home.

Statements from the ABS which trivialise the risks posed by stripping away census anonymity have missed the point. Seeking to justify the proposal by saying that the ABS will never release identifiable information ignores the point that they shouldn’t have it in the first place. And, as my mother taught me – you shouldn’t make promises you cannot keep.

The risks include leaks from corrupt ABS staff, or from organised criminals who wish to perpetrate identity theft and fraud by hacking into the database. The ABS is not magically immune to the risk of data breaches. It was only last year that one of their staff was convicted of leaking data to a friend at the NAB as part of a multi-million dollar insider trading scam.

Blithe reassurances about the security of census information ring hollow as we have seen the slow but steady fallout from so many recent data security breaches, from the Ashley Madison hack to the Department of Immigration’s bungle which saw 9,250 asylum seekers’ details published online. Whether from external hackers, deliberate misuse by ABS staff or negligent losses of data, the only way to prevent data breaches from occurring is to not hold the information in the first place.

Of even more concern is the temptation posed for the Government of a centralised population dataset, just within its reach. How simple it would be for the federal police or ASIO to require the ABS to hand over details of all Muslim men. Or for Centrelink to demand to know just who is living with whom on what income, while claiming welfare benefits. This is the greatest potential impact of the proposal – that the ABS becomes the unwitting tool of a Government intent on mass population surveillance.

The ABS’s own privacy review noted that it faces the risk of what’s known as function creep: that in the future, “name and address information from responses to the 2016 Census may be used for purposes beyond what is currently contemplated by the ABS”. In what seems a fairly breath-taking degree of naivety, the ABS decided that the risk of this happening is “very low”, but that if it did, its response would be to review internal protocols and “consult affected stakeholders”.

The statisticians must be living in fantasy land if they think that once they hold identifiable data on all 24 million people in Australia, that not a single government department, Minister or police force will be interested in tapping into that data for their own, non-research purposes. Just look at the agencies queueing up to get their hands on the metadata that telecommunications companies must now keep by law.

And in the event that a Trump-esque leader demands that the ABS hand over the names and addresses of all Muslims living in Australia (as US census data was used to round up and imprison Japanese-Americans in World War II), how is a review of internal protocols, or consultation with stakeholders, going to fix things?

The only way to prevent function creep is to not hold the information in the first place.

A further privacy risk is re-identification from joined-up data. Even if names and addresses are used only for linking purposes – that is, to link your census answers with information about you from another dataset (such as health or education records), and then stripped out again – the added richness of combined datasets makes it easier to re-identify individuals. Disturbingly, the ABS’s privacy review did not even consider this risk of re-identification, also known as “statistical disclosure risk”. Nor did the concept of Big Data even rate a mention. If our chief statisticians are not calculating the statistical disclosure risk of their own proposal, we are all in trouble.

The only way to prevent re-identification from joined-up datasets is to not link them in the first place.
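
To make that ‘statistical disclosure risk’ concrete, here is a minimal sketch of how two datasets which each look harmless on their own can re-identify a person once they are joined on shared attributes such as age, postcode and sex. The data below is invented for illustration.

    # Illustrative only: invented toy data. Shows how linking two datasets
    # on quasi-identifiers (age, postcode, sex) can re-identify a person
    # even after names and addresses have been stripped from one of them.

    QUASI_IDENTIFIERS = ("age", "postcode", "sex")

    census_answers = [
        # de-identified census record: name and address removed, but
        # quasi-identifiers and sensitive answers retained
        {"age": 34, "postcode": "2037", "sex": "F",
         "religion": "None", "income_band": "high"},
    ]

    identified_dataset = [
        # a separate dataset that does carry names (e.g. a marketing list)
        {"name": "Jane Citizen", "age": 34, "postcode": "2037", "sex": "F"},
    ]

    def reidentify(anonymous_rows, identified_rows):
        """Match 'anonymous' rows to named people on shared quasi-identifiers."""
        matches = []
        for row in anonymous_rows:
            key = tuple(row[k] for k in QUASI_IDENTIFIERS)
            candidates = [p for p in identified_rows
                          if tuple(p[k] for k in QUASI_IDENTIFIERS) == key]
            if len(candidates) == 1:  # a unique match means re-identification
                matches.append((candidates[0]["name"], row))
        return matches

    print(reidentify(census_answers, identified_dataset))
    # [('Jane Citizen', {'age': 34, 'postcode': '2037', 'sex': 'F',
    #   'religion': 'None', 'income_band': 'high'})]

The more attributes a linked census record carries, the fewer people share any given combination of them, and the easier that unique match becomes.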

This proposal represents a massive breach of public trust, and shifts all of the privacy risks onto us, the people of Australia.

But it also carries enormous operational risks for governments, businesses, non-profits and community groups, which each rely on census data for evidence-based decision-making. Research tells us that when people do not trust a data collection, significant numbers will simply provide misinformation. Surveys conducted periodically by the Office of the Australian Privacy Commissioner found that in 2013 around three in ten people had falsified their name or other details to protect their privacy when using websites, up from 25% in 2007.

In 2001, the ABS was worried enough about the impact on the integrity of census data to try to head off a joke doing the rounds that people should list their religion on the census form as ‘Jedi knight’. Its response was eminently sensible, pointing out that the accuracy of census data is important for all Australians, as it impacts on decision-making across all aspects of our lives: from where to draw electoral boundaries, to the building of schools and hospitals, and the routing of local buses. Further, the question about religion is the only optional question on the census; so if you object to being asked about religion, you can simply not answer it, without risking criminal penalties.

Nonetheless, in the 2001 census results, just over 73,000 people described themselves as Jedi, which is more people than identified as Salvation Army or Seventh Day Adventists, and only slightly fewer than those who listed their religion as Judaism.

If census data can be so easily skewed by a bunch of Star Wars fans, the potential impact of enough people being sufficiently concerned about safeguarding their privacy to contemplate providing inaccurate responses, or not responding at all, should surely make the ABS think twice about this proposal.

And what happens to other nationally-important data collections that don’t have the force of law behind them? The ABS’s review did not consider how a loss of public trust in the census might impact on some people’s willingness to accept or embrace other government projects, such as the new My Health Record, if they fear the linking of that data with their census records.

I am surprised that the many stakeholders who seek to use census data, or indeed the agencies which run any other major government programs, are apparently willing to risk the integrity of the data on which they rely. Or perhaps, like the rest of us, they were too busy in the week before Christmas to notice that our privacy protections were being wrenched away.

The ABS’s privacy review noted that it faces the risk that this proposal “may cause public concern which results in a reduction of participation levels in ABS collections, and/or a public backlash”. Its suggestions for mitigating that risk are mostly focused on PR efforts to calm us all down, but it also says that the ABS will “reconsider the privacy design for the proposal, if required”.

Which means that there is still hope, that with enough public pressure, the ABS itself – or at least the governments, businesses and charities which care about the reliability of census data – will see this proposal for the folly it is, and return to a census format designed to ensure both the integrity of our data, and the protection of our privacy.

 

Photo (c) Shutterstock

Cash for data? Ownership of personal information not a solution


World Wide Web inventor Sir Tim Berners-Lee has given a speech in London, re-affirming the importance of privacy, but unfortunately he has muddied the waters by casting aspersions on privacy law. Berners-Lee makes a technologist’s error, calling for unworkable new privacy mechanisms where none in fact are warranted.

The Telegraph reports Berners-Lee as saying “Some people say privacy is dead – get over it. I don’t agree with that. The idea that privacy is dead is hopeless and sad.” He highlighted that people’s participation in potentially beneficial programs like e-health is hampered by a lack of trust, and a sense that spying online is constant.

Of course he’s right about that. Yet he seems to underestimate the data privacy protections we already have. Instead he envisions “a world in which I have control of my data. I can sell it to you and we can negotiate a price, but more importantly I will have legal ownership of all the data about me” he said according to The Telegraph.

It’s a classic case of being careful what you ask for, in case you get it. What would control over “all data about you” look like? Most of the data about us these days – most of the personal data, aka Personally Identifiable Information (PII) – is collected or created behind our backs, by increasingly sophisticated algorithms. Now, people certainly don’t know enough about these processes in general, and in too few cases are they given a proper opportunity to opt in to Big Data processes. Better notice and consent mechanisms are needed for sure, but I don’t see that ownership could fix a privacy problem.

What could “ownership” of data even mean? If personal information has been gathered by a business process, or created by clever proprietary algorithms, we get into obvious debates over intellectual property. Look at medical records: in Australia and I suspect elsewhere, it is understood that doctors legally own the medical records about a patient, but that patients have rights to access the contents. The interpretation of medical tests is regarded as the intellectual property of the healthcare professional.

The philosophical and legal quandaries are many. With data that is only potentially identifiable, at what point would ownership flip from the data’s creator to the individual to whom it applies? What if data applies to more than one person, as in household electricity records, or, more seriously, DNA?

What really matters is preventing the exploitation of people through data about them. Privacy (or, strictly speaking, data protection) is fundamentally about restraint. When an organisation knows you, they should be restrained in what they can do with that knowledge, and not use it against your interests. And thus, in over 100 countries, we see legislated privacy principles which require that organisations only collect the PII they really need for stated purposes, that PII collected for one reason not be re-purposed for others, that people are made reasonably aware of what’s going on with their PII, and so on.

Berners-Lee alluded to the privacy threats of Big Data, and he’s absolutely right. But I point out that existing privacy law can substantially deal with Big Data. It’s not necessary to make new and novel laws about data ownership. When an algorithm works out something about you, such as your risk of developing diabetes, without you having to fill out a questionnaire, then that process has collected PII, albeit indirectly. Technology-neutral privacy laws don’t care about the method of collection or creation of PII. Synthetic personal data, collected as it were algorithmically, is treated by the law in the same way as data gathered overtly. An example of this principle is found in the successful European legal action against Facebook for automatic tag suggestions, in which biometric facial recognition algorithms identify people in photos without consent.

Technologists often under-estimate the powers of existing broadly framed privacy laws, doubtless because technology neutrality is not their regular stance. It is perhaps surprising, yet gratifying, that conventional privacy laws treat new technologies like Big Data and the Internet of Things as merely potential new sources of personal information. If brand new algorithms give businesses the power to read the minds of shoppers or social network users, then those businesses are limited in law as to what they can do with that information, just as if they had collected it in person. Which is surely what regular people expect.

 

Photo (c) Shutterstock

Woolly thinking & knotty problems: how to untangle the Disclosure rules


I think our privacy laws are too tough.  (Collective gasp!  An avowed champion of privacy rights thinks the laws are too tough??)

Wait!  No!  I should clarify, before you think I have lost my mind and gone over to the dark side.

No, I think our laws are too tough to understand, and therefore too hard to comply with.  So as a result, we probably don’t have great compliance.

Generally, I believe our laws tend to manage the delicate balancing act between competing public interests, like privacy and medical research, or privacy and law enforcement.  But in the expression of that balancing act there are so many permutations, double negatives and sub-clauses of sub-clauses that it can make your brain hurt when you try to figure out exactly what the correct rule is.

I’ve written before about the black holes in NSW privacy law, but today I’m concentrating on the convoluted drafting instead.  Unfortunately, much of the problem comes from woolly thinking when amendments are tacked on without much thought for the coherence of the legislation as a whole.

Did you know, for example, that in NSW privacy law there are thirteen differently-phrased exemptions relating to disclosures for law enforcement and investigations alone?  Some rules only cover health information; some cover personal information but not ‘sensitive information’; some cover transborder disclosures, but others don’t.

Here’s a flavour of the subtle differences.

  • One rule for health information is if the disclosure “is reasonably necessary for the exercise of law enforcement functions by law enforcement agencies in circumstances where there are reasonable grounds to believe that an offence may have been, or may be, committed”.
  • The equivalent rule for ‘sensitive information’ (ethnicity, religion etc) is if the disclosure is “reasonably necessary for the purposes of law enforcement in circumstances where there are reasonable grounds to believe that an offence may have been, or may be, committed”.
  • And the equivalent rule for all the other types of personal information is if the disclosure is “reasonably necessary … in order to investigate an offence where there are reasonable grounds to believe that an offence may have been committed”.

Why?  Why should there be three differently-worded standards for what is essentially the same public interest exemption?  Because they were drafted in different decades (1998, 2002 and 2015, to be precise), without much regard for each other, that’s why.  Yet agencies need to ensure their internal protocols reflect all three tests.  As a result, too much privacy compliance effort is spent drafting complicated documents, instead of on proactive strategies to deliver better privacy outcomes.

The woolly thinking that goes into rushed drafting, without considering the bigger picture, then leads to further anomalies.  For instance, in 2015 an amendment Bill introduced both a new ‘law enforcement’ exemption and a ‘research’ exemption, including from the transborder disclosure rule for non-health personal information.  Yet there is no equivalent provision allowing the transborder disclosure of health information for either research or law enforcement reasons.  So too bad if the research project to help cure cancer is a national endeavour.

This piecemeal approach to drafting – always tinkering and adding, never actually fixing – has also led to different language applying to what should be common concepts like consent.  Some sections demand express consent, while others suggest consent could be inferred.  Sometimes the thing that must be consented to is an act (e.g. a particular disclosure), while other times it is a state of being (i.e. the state of being non-compliant with a particular rule).

Another example is the ‘emergency scenario’ exemption.  In relation to disclosing health information, an organisation needs to “reasonably believe” their disclosure to be necessary; in relation to non-health personal information – but excluding ‘sensitive information’ – they have to “believe on reasonable grounds” that the disclosure is necessary; and for ‘sensitive information’, the disclosure must actually be necessary.

Necessary for what?  There the language differs again.  Sometimes it is “to lessen or prevent” a threat.  Other times it is “to prevent or lessen”.  And sometimes just “to prevent”.

And what threats are we talking about?  One rule says “a serious and imminent threat to the life or health of the individual concerned or another person”.  Another has the same test, but applies it to life, health “or safety”.  A third refers to “a serious threat to public health or public safety”.
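
To show how much branching those variations force onto anyone trying to write an internal protocol, here is a toy sketch in Python. The category labels and paraphrased tests are my own shorthand, not the statutory wording, and the purpose and threat formulations are listed as variants only, without attempting to attach each one to a particular rule.

```python
# Toy sketch only: rough paraphrases of how the 'emergency' disclosure
# exemption changes with the type of information involved. Category labels
# are shorthand, not statutory terms.

BELIEF_STANDARD = {
    "health_information": "must 'reasonably believe' the disclosure is necessary",
    "other_personal_information": "must 'believe on reasonable grounds' that it is necessary",
    "sensitive_information": "the disclosure must actually be necessary",
}

# The purpose and the kind of threat are worded differently again; which
# formulation attaches to which rule is not mapped here.
PURPOSE_VARIANTS = ("to lessen or prevent", "to prevent or lessen", "to prevent")
THREAT_VARIANTS = (
    "a serious and imminent threat to the life or health of the individual concerned or another person",
    "a serious and imminent threat to life, health or safety",
    "a serious threat to public health or public safety",
)

def emergency_belief_test(information_type: str) -> str:
    """Return the paraphrased threshold that applies to a given information type."""
    try:
        return BELIEF_STANDARD[information_type]
    except KeyError:
        raise ValueError(f"unknown information type: {information_type!r}")

for info_type in BELIEF_STANDARD:
    print(f"{info_type}: {emergency_belief_test(info_type)}")
```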

Anyone else feel exhausted by the mental gymnastics needed to cope with the differences between what should be three identical rules?  (Or even better – just one rule!)  If the privacy laws were easier to follow, then compliance would be easier, and organisations could focus on delivering better privacy outcomes.

Here at Salinger Privacy we can’t reduce the red tape, but we have come up with a way to untangle the knots for you.

Our new guide, Untangling Privacy, is designed to help you quickly navigate your way through the NSW privacy laws. It is relevant for private sector organisations and State-owned corporations in NSW which are regulated by HRIPA, and NSW public sector agencies (including universities and local councils) regulated by both PPIPA and HRIPA.

The guide offers a set of visual flowcharts, with yes/no answers determining your path, to quickly guide you through the NSW Disclosure principles – and all the convoluted exemptions to those principles.  It reflects all the amendments to PPIPA which commenced in 2016, including the new ‘transborder’ rule.

So now you can untangle the knotty legislative rules to quickly figure out the answer to the question: Can we disclose this?

 

Photo (c) Shutterstock
