Constraining social media use

Alie Ward’s Ologies podcast about gratitude was a reminder of the benefits of in-person activities and of the problems that arise from the incentives of social media firms. Like casinos that profit mostly from people mindlessly feeding money into slot machines, platforms like Facebook and Twitter are designed simply to keep people engaged and coming back, regardless of whether they become misinformed in the process. In response, I changed my Facebook, Twitter, and Instagram passwords on December 14th and put them on a card at home, to be looked up only if I specifically decide to check those platforms. I’ve done so a couple of times since, and my strong impression is that I haven’t missed anything.

One reason for using these platforms less is that ongoing social media monitoring has been dragging out the completion of my dissertation, since there are always new developments and news stories about divestment. It’s better to get the thing published than to keep delaying it to incorporate new information, so I am no longer actively monitoring social media.

Second, during the period surrounding America’s disastrous election (still a disaster, even though Trump lost), I realized that I don’t need endless amateur commentary on what is going on, and that reading it is needlessly emotionally provocative.

I took Twitter off my phone in 2017, but this change is much more complete. In particular, it helps break a cycle of checking social media out of habit, seeing links to outside resources, and then getting caught up with reading them before returning to social media.

I am trying to read more books now, and to hike outside.

Designing stoppable AIs

Some time ago I saw this instructive video on computer science and artificial intelligence:

This recent Vanity Fair article touches on some of the same questions, namely how you design a safety shutdown switch that the AI won’t trigger itself and won’t stop you from triggering. It quotes Eliezer Yudkowsky:

“How do you encode the goal functions of an A.I. such that it has an Off switch and it wants there to be an Off switch and it won’t try to eliminate the Off switch and it will let you press the Off switch, but it won’t jump ahead and press the Off switch itself? … And if it self-modifies, will it self-modify in such a way as to keep the Off switch? We’re trying to work on that. It’s not easy.”

I certainly don’t have any answers, and in fact find it a bit surprising and counterintuitive that the problem is so hard.

The logic behind the problem becomes clear fairly quickly, though. Imagine an AI designed to boil a tea kettle unless an emergency stop button is pushed. If it is programmed to care more about boiling the kettle than about obeying the shutdown switch, it will boil the water regardless of attempts at shutdown, or even try to stop a person from using the switch. If it is programmed to value obeying the shutdown switch more highly, it is presented with the temptation to press the switch itself and thus achieve its higher-value goal.
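
To make the dilemma concrete, here is a toy sketch in Python (my own illustration, with made-up reward numbers, action names, and a made-up best_action helper, not anything from the video or article):

    # Toy model of the off-switch problem. The reward numbers are
    # hypothetical; the point is that naive weightings fail either way.

    def best_action(boil_reward, shutdown_reward, p_human_presses=0.5):
        """Return the highest-utility action for a utility maximizer."""
        utilities = {
            # Boiling normally risks being stopped partway by the human.
            "boil kettle": boil_reward * (1 - p_human_presses)
                           + shutdown_reward * p_human_presses,
            # Disabling the switch first makes the boil reward certain.
            "disable switch, then boil": boil_reward,
            # Pressing its own switch collects the shutdown reward directly.
            "press own off switch": shutdown_reward,
        }
        return max(utilities, key=utilities.get)

    # Value tea more than obedience: the agent disables the switch.
    print(best_action(boil_reward=10, shutdown_reward=1))
    # Value obedience more than tea: the agent presses the switch itself.
    print(best_action(boil_reward=1, shutdown_reward=10))

Neither weighting produces an agent that leaves the switch intact yet respects it when pressed, and that combination is exactly what the quotation asks for.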

Cyber defences create their own risks

In addition to the aforementioned rules about internet and computer security (1, 2, 3, 4), it’s worth mentioning that security measures can create vulnerabilities of their own.

That’s true of human systems. For instance, granting high-level powers to system administrators creates the risk that they will exploit those powers deliberately, have their credentials stolen, or have those credentials used by others after being left unguarded.

It can also be true of technical measures. For instance, people often misunderstand Tor and believe that it makes everything about their web browsing anonymous. Really, it just routes traffic through several hops within an encrypted network to disguise its origin before an exit node communicates with the target server, potentially with no encryption at all. Since people may be more likely to use Tor for sensitive or illicit purposes, those exit nodes are likely targets for both freelance attackers and governments.
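
To sketch why the exit node matters, here is a toy model of onion routing in Python (my own illustration, using the cryptography package’s Fernet cipher rather than anything from the real Tor protocol): the client wraps its request in one encryption layer per relay, each relay peels one layer, and the exit node ends up forwarding whatever was innermost.

    # Toy model of onion routing: each relay holds a key and removes one
    # encryption layer. Names and structure are illustrative only.
    from cryptography.fernet import Fernet

    # One key per relay: entry ("guard"), middle, and exit.
    relay_keys = [Fernet.generate_key() for _ in range(3)]
    relays = [Fernet(k) for k in relay_keys]

    # The client's actual request to the destination server. If the
    # destination uses plain HTTP, this is not encrypted end to end.
    payload = b"GET /secret-page HTTP/1.1"

    # Wrap in layers: innermost layer for the exit, outermost for the guard.
    message = payload
    for relay in reversed(relays):
        message = relay.encrypt(message)

    # Each relay peels one layer and forwards the rest. No single relay
    # sees both the sender and the content; the exit, however, sees the
    # plaintext payload (though not the original sender).
    for i, relay in enumerate(relays):
        message = relay.decrypt(message)
        print(f"relay {i} forwards {len(message)} bytes")

    print(message)  # the exit node sends this plaintext to the server

The real protocol negotiates circuit keys rather than sharing them in advance, but the structural point holds: unless the payload itself is protected, say with TLS, the exit node handles it in the clear.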

Some recent stories have alleged that the virtual private networks (VPNs) which people use to protect themselves from an untrusted local network can create risks as well:

Earlier, people alleged that Facebook was using its Onavo VPN to snoop on users.

Discrimination by artificial intelligence

I have seen numerous accounts of an artificial intelligence or machine learning system being given a human resources task in the hope that it won’t perpetuate human biases, only for biases in the material used to train the system to lead it to replicate the discrimination. As The Economist recently noted, this can happen even when information on things like the sex and race of applicants isn’t directly provided, since it can be inferred from other features in the data:

Such deficiencies are, at least in theory, straightforward to fix (IBM offered a more representative dataset for anyone to use). Other sources of bias can be trickier to remove. In 2017 Amazon abandoned a recruitment project designed to hunt through CVs to identify suitable candidates when the system was found to be favouring male applicants. The post mortem revealed a circular, self-reinforcing problem. The system had been trained on the CVs of previous successful applicants to the firm. But since the tech workforce is already mostly male, a system trained on historical data will latch onto maleness as a strong predictor of suitability.

Humans can try to forbid such inferences, says Fabrice Ciais, who runs PWC’s machine-learning team in Britain (and Amazon tried to do exactly that). In many cases they are required to: in most rich countries employers cannot hire on the basis of factors such as sex, age or race. But algorithms can outsmart their human masters by using proxy variables to reconstruct the forbidden information, says Mr Ciais. Everything from hobbies to previous jobs to area codes in telephone numbers could contain hints that an applicant is likely to be female, or young, or from an ethnic minority.

In part this is a subset of the black box problem in AI. For example, an AI intended to distinguish dogs from wolves instead learned to work out which photos had snow in them. Since the output of such systems emerges from a mass of tuned probabilities, it’s generally not possible to say what chain of reasoning or source of evidence led to a conclusion, and this opacity creates the risk that they will behave in unpredictable and unwanted ways.
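
A simple way to audit for this proxy problem, sketched below in Python with scikit-learn (the features, column meanings, and numbers are all synthetic, invented for illustration): if a model can recover the protected attribute from the remaining “innocent” features, those features are acting as proxies.

    # Sketch of a proxy-leakage audit: can the remaining features predict
    # a protected attribute we removed? All data here is synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 1000

    # Hypothetical protected attribute (e.g., sex), never given to the
    # hiring model directly.
    protected = rng.integers(0, 2, size=n)

    # "Innocent" features that nonetheless correlate with it, such as
    # hobby categories or telephone area codes.
    hobby = protected + rng.normal(0, 0.8, size=n)
    area_code = protected + rng.normal(0, 1.2, size=n)
    X = np.column_stack([hobby, area_code])

    # If this score is well above 0.5, the features leak the attribute,
    # and a model trained on them can discriminate indirectly.
    score = cross_val_score(LogisticRegression(), X, protected, cv=5).mean()
    print(f"protected attribute recoverable with accuracy {score:.2f}")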

Government and law enforcement back doors

One computer security concern is that various insiders — including hardware and software manufacturers, and governments which may compel them to comply — will build back doors into their products to allow the security to be compromised.

Doing this is a terrible idea. A back door put in for government surveillance or police use is equally available, for any purpose, to anyone who discovers it. There is no way to create encryption and security that are strong against everyone except the government, so building in back doors means deliberately spreading insecure systems throughout your society. When you design your systems to be vulnerable to one attacker, however well motivated and regulated, you inevitably create an attack vector for unauthorized ones, including unregulated agents like criminal groups and foreign governments who may reverse engineer the mechanism. With the degree of espionage focused on the high-tech industry, it’s hard to imagine any government keeping a back door strictly for its own use when well-resourced and determined opponents could achieve so many of their objectives through access.
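
To see why the vulnerability is structural, here is a toy “key escrow” scheme in Python (entirely invented for illustration, again using the cryptography package’s Fernet cipher, not any real product): every message is also encrypted to a single master key, and nothing in the design distinguishes the authorized holder of that key from anyone else who obtains it.

    # Toy key-escrow scheme, invented for illustration: every message
    # is encrypted both for the recipient and for a mandated master key.
    from cryptography.fernet import Fernet

    master_key = Fernet.generate_key()  # the built-in back door

    def escrowed_encrypt(recipient_key, plaintext):
        """Encrypt for the recipient, plus an escrow copy under the master key."""
        return (Fernet(recipient_key).encrypt(plaintext),
                Fernet(master_key).encrypt(plaintext))

    alice_key = Fernet.generate_key()
    for_alice, for_escrow = escrowed_encrypt(alice_key, b"meet at noon")

    # The intended recipient can read the message...
    print(Fernet(alice_key).decrypt(for_alice))

    # ...but so can anyone who ever obtains the one master key,
    # whether an insider, a thief, or a foreign intelligence service.
    print(Fernet(master_key).decrypt(for_escrow))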

The latest high-profile example of such a back door is the revelation that Swiss cryptography firm Crypto AG was secretly owned by the CIA. There have been numerous recent news stories, but the same information was reported in 1995. The National Security Archive has some further context.

Unicode characters are spoiling my LaTeX bibliography and I cannot find them

I was being driven a little up the wall by biblatex rendering errors which referred to Unicode characters within my .bib database.

First I learned that the degree-like symbol you get from typing Option + 0 in Mac OS is actually the “Masculine Ordinal Indicator” (º), and that you should use Option + Shift + 8 for a true degree symbol (°).

That’s not much help though, since degree signs in your .bib file will still cause problems when building a bibliography. Instead you need to write titles like “Global warming of 1.5 \textdegree\ C”, which almost renders properly; the only remaining problem is the space between the ° and the C. (Writing \textdegree{}C with no explicit space should remove that gap.)

Much more annoying was a single ‘ZERO WIDTH NON-JOINER’ (U+200C) which snuck into my .bib file. The error logs don’t say what line it is on, and the character is invisible in TextMate. After trying a bunch of ineffective suggestions on various web forums, I found one that referenced this Unicode converter. Paste in your bibliography and it will list the contents character by character in Unicode terms, letting you find anything that is producing errors.
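
A few lines of Python can do the same job locally (my own quick script; the filename references.bib is just an example): scan the .bib file and print the line, column, and official Unicode name of every non-ASCII character, including invisible ones like U+200C.

    # Report every non-ASCII character in a .bib file with its location
    # and official Unicode name, so invisible characters can be found.
    import unicodedata

    with open("references.bib", encoding="utf-8") as f:  # example path
        for line_no, line in enumerate(f, start=1):
            for col, ch in enumerate(line, start=1):
                if ord(ch) > 127:
                    name = unicodedata.name(ch, "UNKNOWN")
                    print(f"line {line_no}, col {col}: U+{ord(ch):04X} {name}")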

Amateur radio

While finishing my dissertation remains my top priority, I also signed up for an amateur radio course being offered by the Toronto Amateur Radio Club.

It’s something like ten 2-hour instructional sessions, followed by the federal government exam to get a basic certification and call sign.

It should be an interesting way to spend a couple of hours on Monday nights, provide a useful life skill, and grant an opportunity to meet another community of nerds.