People reluctant to use self-driving cars, survey shows

Autonomous vehicles are going to save us from traffic, emissions, and inefficient models of car ownership. But while songs of praise for self-driving cars are regularly sung in Silicon Valley, does the public really want them?

That’s what my student Charlie Hewitt and our collaborators Ioannis Politis and Theocharis Amanatidis set out to study. We decided to conduct a public opinion survey to find out.

However, we first had to solve two problems.

  1. When Charlie started his work, there were no existing surveys designed specifically around autonomous vehicles. There were surveys for technology acceptance in general, and some for cars, which were a good start. So we combined those and added questions specific to vehicle autonomy, resulting in a new survey designed specifically for autonomous vehicles: the Autonomous Vehicle Acceptance Model, or AVAM for short.
  2. When people think of self-driving cars, they generally picture a futuristic pod with no steering wheel or controls that they just step into and get magically transported to their destination. However, the auto industry differentiates between six levels of autonomy. Previous studies had attempted to get people’s attitudes to each of these levels, but it turns out people can’t picture these different levels of autonomy very well, and don’t understand how they differ. So, Charlie created short descriptions to explain the differences between them. These vignettes are a key part of the AVAM, because they help the general public understand the implications of different levels of autonomy.

Here are the six levels of autonomous vehicles as described in our survey:

  • Level 0: No Driving Automation. Your car requires you to fully control steering, acceleration/deceleration and gear changes at all times while driving. No autonomous functionality is present.
  • Level 1: Driver Assistance. Your car requires you to control steering and acceleration/deceleration on most roads. On large, multi-lane highways the vehicle is equipped with cruise-control which can maintain your desired speed, or match the speed of the vehicle to that of the vehicle in front, autonomously. You are required to maintain control of the steering at all times.
  • Level 2: Partial Driving Automation. Your car requires you to control steering and acceleration/deceleration on most roads. On large, multi-lane highways the vehicle is equipped with cruise-control which can maintain your desired speed, or match the speed of the vehicle to that of the vehicle in front, autonomously. The car can also follow the highway’s lane markings and change between lanes autonomously, but may require you to retake control with little or no warning in emergency situations.
  • Level 3: Conditional Driving Automation. Your car can drive partially autonomously on large, multi-lane highways. You must manually steer and accelerate/decelerate when on minor roads, but upon entering a highway the car can take control and steer, accelerate/decelerate and switch lanes as appropriate. The car is aware of potential emergency situations, but if it encounters a confusing situation which it cannot handle autonomously then you will be alerted and must retake control within a few seconds. Upon reaching the exit of the highway the car indicates that you must retake control of the steering and speed control.
  • Level 4: High Driving Automation. Your car can drive fully autonomously only on large, multi-lane highways. You must manually steer and accelerate/decelerate when on minor roads, but upon entering a highway the car can take full control and can steer, accelerate/decelerate and switch lanes as appropriate. The car does not rely on your input at all while on the highway. Upon reaching the exit of the highway the car indicates that you must retake control of the steering and speed control.
  • Level 5: Full Driving Automation. Your car is fully autonomous. You are able to get into the car and instruct it where you would like to travel to, the car then carries out your desired route with no further interaction required from you. There are no steering or speed controls as driving occurs without any interaction from you.

Before you read on, think about each of those levels. What do you think are the advantages and disadvantages of each? Which would you be comfortable with and why?

We sent our survey to 187 drivers recruited from across the USA, and here’s what we found:

Result 1: our respondents were not ready to accept autonomous vehicles.

We found that on many measures, people report lower acceptance of higher automation levels. People perceive higher autonomy levels as less safe, report lower intent to use them, and feel more anxious about them.

We compared some of the results with those from an earlier study, conducted in 2014. We had to make some simplifying assumptions, as the 2014 study wasn’t conducted with the AVAM. However, we still found that our results were mostly similar: both studies found that people (unsurprisingly) expected to have to do less as the level of autonomy increased. Both studies also found that people showed lower intent to use higher autonomy vehicles, and poorer general attitude towards higher autonomy. Self-driving cars seem to be suffering in public opinion!

Result 2: the biggest leap in user perception comes with full autonomy.

We asked people how much they would expect to have to use their hands, feet and eyes while using a vehicle at each level of autonomy. Even though vehicles at the intermediate levels of autonomy (3 and 4) can do significantly more than levels 1 and 2, people did not perceive the higher levels as requiring significantly less engagement. However, at level 5 (full autonomy), there was a dramatic drop in expected engagement. This was an interesting and new finding (albeit not entirely surprising). One explanation for this is that people only really perceive two levels of autonomy: partial and full, and don’t really care about the minor differences in experience with different levels of partial autonomy.

All in all, we were fascinated to learn about people’s attitudes to self-driving cars. Despite the enthusiasm displayed by the tech media, there seems to be a consistent concern around their safety and reluctance to adopt amongst the general public. Even if self-driving cars really do end up being safer and better in many other ways than regular cars, automakers will still face this challenge of public perception.

And now, a summary poem:

The iron beast has come alive,
We do not want it, do not want it
Its promises we do not prize
It does not do as we see fit

Only when we can rely
On iron beast with its own eye
Only then will we concede
And disaffection yield to need

If you’re interested in using our questionnaire or our data, please reach out! I’d love to help you build on our research.

Want to learn more about our study? Read it here (click to download PDF) or see the publication details below:

Charlie Hewitt, Ioannis Politis, Theocharis Amanatidis, and Advait Sarkar. 2019. Assessing public perception of self-driving cars: the autonomous vehicle acceptance model. In Proceedings of the 24th International Conference on Intelligent User Interfaces (IUI ’19). ACM, New York, NY, USA, 518-527. DOI: https://doi.org/10.1145/3301275.3302268

Ask people to order things, not score them

Ever graded an essay? Given scores to interview candidates? Given a rating to an item on Amazon? Liked a video on YouTube?

We’re constantly asked to rate or score things on absolute scales. It’s convenient: you only have to look at each thing once to give it a score, and once you’ve got a set of things all reduced to a single number, you can compare them, group them into categories, and find the best one (and the worst).

However, a growing body of evidence shows that humans are simply not very good at giving absolute scores to things. By not very good, we mean there are two problems:

  • Different people give different scores to the same thing (low inter-rater reliability)
  • The same person can give different scores to the same thing, when asked to score it repeatedly (low intra-rater reliability)
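To make these two notions of reliability concrete, here is a sketch of Cohen’s kappa, a standard chance-corrected measure of agreement between two raters, in plain Python. The example labels are invented for illustration, not data from our studies:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items both raters scored identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if each rater scored independently at their own base rates.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[k] * counts_b[k] for k in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two (hypothetical) raters labelling the same ten words as complex (1) or simple (0).
a = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
b = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
print(round(cohens_kappa(a, b), 2))  # prints 0.6
```

A kappa of 1 means perfect agreement, 0 means no better than chance; low values of this kind of statistic are exactly what “low inter-rater reliability” refers to.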

But don’t worry! There’s a better way: ordering things, not scoring them. Let me illustrate with two case studies.

Making complex text easier to read

A cool modern application of artificial intelligence / machine learning is “lexical simplification”, which is an ironically fancy way of saying “making complex text easier to read by substituting complex words with simpler synonyms”. This is a great way to make text accessible to young readers and those not fluent in the language. Finding synonyms for words is easy, but detecting which words in a sentence are “complex” is hard.

To teach the AI system what counts as a complex word and what doesn’t, we need to give it a bunch of labelled training examples. That is, a list of words that have already been labelled by humans as being complex or not. Now traditionally, this dataset was generated by giving human labellers some text, and asking them to select the complex words in that text. This is a simple scoring system: every word is scored either 1 or 0, depending on whether the word is complex or not.

However, we knew from previous research that people are inconsistent in giving these absolute scores. So, my student Sian Gooding set out to see if we could do better. She conducted an experiment where half the participants used the old labelling system, and the other half used a sorting system. In the sorting system, participants were given some text, and asked to order the words in that text from least to most complex.

We found that with the sorting system, participants were far more consistent and created a far better labelled training set!

Helping clinicians assess multiple sclerosis

The Microsoft ASSESS-MS project aimed to use the Kinect camera (which captures depth information as well as regular video) to assess the progression of multiple sclerosis. The idea is that because MS causes degeneration of motor function that manifests in movements such as tremor, it should be possible to use computer vision to track and understand a patient’s movements with the Kinect camera, and assign them a score corresponding to the severity of their illness.

To train the system, we first needed a set of labelled training videos. That is, videos of patients for which neurologists had already provided severity-of-illness scores. The problem was that the clinicians were giving scores on a standardised medical scale of 0 to 4, but their scores suffered from poor consistency! With inconsistent scores, there was little hope that the computer vision system would learn anything.

The video illustrates our deck sorting interface for clinicians

Our solution was to ask clinicians to sort sets of patient videos. We found that giving clinicians “decks” of about 8 videos to sort in order of illness severity worked well – any more than that and the task became too challenging. But we wanted them to rate nearly 400 videos. To go from orderings of 8 videos at a time, to a full set of orderings for the entire dataset, we needed an additional step. For this, we used the TrueSkill algorithm, which is able to merge the results from many orderings (how exactly we did this is detailed in our paper, which you can read here (PDF)).
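Our actual merging step used TrueSkill’s probabilistic inference, but the flavour of the idea can be sketched with a much simpler Elo-style stand-in: every pair implied by each sorted deck nudges a global score for each video. The deck sizes and video IDs below are invented for illustration:

```python
from itertools import combinations

def merge_orderings(decks, k=32, rounds=20):
    """Merge many small sorted decks into one global severity score per video.

    Each deck is a list of video IDs sorted from least to most severe. Every
    pair within a deck counts as a 'win' for the more severe video, and an
    Elo-style update nudges the global scores accordingly.
    """
    scores = {video: 1000.0 for deck in decks for video in deck}
    for _ in range(rounds):
        for deck in decks:
            for lo, hi in combinations(deck, 2):  # deck order: lo is less severe than hi
                # Probability that hi outranks lo under the current scores.
                expected = 1 / (1 + 10 ** ((scores[lo] - scores[hi]) / 400))
                scores[hi] += k * (1 - expected)
                scores[lo] -= k * (1 - expected)
    return scores

# Three overlapping decks of (hypothetical) patient-video IDs, each sorted by severity.
decks = [["v1", "v2", "v3"], ["v2", "v4", "v5"], ["v1", "v4", "v3", "v5"]]
scores = merge_orderings(decks)
ranking = sorted(scores, key=scores.get)  # least to most severe overall
```

Because the decks overlap, videos that were never directly compared still end up on a common scale – the same property that lets TrueSkill cover 400 videos without exhaustive comparison.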

To our amazement, we found that the resulting scores were significantly more consistent than anything we had previously measured, and handily exceeded clinical gold standards for consistency.

But why does it work?

It’s not yet clear why people are so much better at ordering than scoring. One hypothesis is that it requires people to provide less information. When you score something on a scale of 1-10, you have 10 choices for your answer. But when you compare two items A and B, you only have 3 choices: is A less than B, or is B less than A, or are they equal? However, this hypothesis doesn’t explain what Sian and I saw in the word complexity experiment, since in the scoring condition, users were only assigning scores of 0 or 1. Another hypothesis is that considering how multiple items relate to each other gives people multiple reference points, leading to better decisions. More research is required to test these hypotheses.
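The “less information” hypothesis can be made concrete with a back-of-envelope calculation, assuming (simplistically) that every outcome of a judgement is equally likely – an illustration, not a result from the paper:

```python
import math

# Bits of information a rater must supply per judgement.
score_bits = math.log2(10)   # absolute score on a 1-10 scale
compare_bits = math.log2(3)  # A < B, A > B, or A == B

print(f"scoring: {score_bits:.2f} bits, comparing: {compare_bits:.2f} bits")
```

This prints roughly 3.32 bits per absolute score against 1.58 bits per comparison: each comparison demands less than half the information of a ten-point score.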

In conclusion

People are asked to score things on absolute scales all the time, but they’re not very good at it. We’ve shown that people are significantly better at ordering things in a variety of domains, including identifying complex words and assessing multiple sclerosis, although we’re not quite sure why.

The next time you find yourself assigning absolute scores to things – try ordering them instead. You might be surprised at the clarity and consistency it brings!

And now, a summary poem:

I wished to know the truth about this choice
And with no guide I found myself adrift
No measure, no register, no voice
But when juxtaposed with others,
brought resolution swift.

Black and white, true and false, desire:
Nature makes a myriad form of each.
Context drives our understanding higher,
To compare things brings them well within our reach.

Want to learn more about our studies? See the publication details below:

Sarkar, Advait, Cecily Morrison, Jonas F. Dorn, Rishi Bedi, Saskia Steinheimer, Jacques Boisvert, Jessica Burggraaff et al. “Setwise comparison: Consistent, scalable, continuum labels for computer vision.” In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pp. 261-271. ACM, 2016. https://doi.org/10.1145/2858036.2858199. Download PDF

Gooding, Sian, Ekaterina Kochmar, Alan Blackwell, and Advait Sarkar. “Comparative judgments are more consistent than binary classification for labelling word complexity.” In Proceedings of the 13th Linguistic Annotation Workshop, pp. 208-214. 2019. https://doi.org/10.18653/v1/W19-4024. Download PDF

Steinheimer, Saskia, Jonas F. Dorn, Cecily Morrison, Advait Sarkar, Marcus D’Souza, Jacques Boisvert, Rishi Bedi et al. “Setwise comparison: efficient fine-grained rating of movement videos using algorithmic support–a proof of concept study.” Disability and rehabilitation (2019): 1-7. https://doi.org/10.1080/09638288.2018.1563832

Human language isn’t the best way to chat with Siri or Alexa, probably

The year is 2019. Voice-controlled digital assistants are great at simple commands such as “set a timer…” and “what’s the weather?”, but frustratingly little else.

Human language seems to be an ideal interface for computer systems; it is infinitely flexible and the user already knows how to use it! But there are drawbacks. Computer systems that aim to understand arbitrary language are really hard to build, and they also create unrealistic expectations of what the system can do, resulting in user confusion and disappointment.

The next frontier for voice assistants is complex dialogue in challenging domains such as managing schedules, analysing data, and controlling robots. The next generation of systems must learn to map ambiguous human language to precise computer instructions. The mismatch between user expectations and system capabilities is only worsened in these scenarios.

What if we could preserve the familiarity of natural language, while better managing user expectations and simplifying the system to boot? That’s exactly what my student Jesse Mu set out to study. The idea was to use what we called a restricted language interface, one that is a well-chosen subset of full natural language.

Jesse designed an experiment where participants played an interactive computer game called SHRDLURN. In this game, the player is given a set of blocks of different colours, and a “goal”, which is the winning arrangement of blocks. The player types instructions to the computer such as “remove the red blocks” and the computer tries to execute the instruction. The interesting bit is that the computer doesn’t understand language to begin with. In response to a player instruction, it presents the player with a list of block arrangements, and the player picks the arrangement that fits their instructions. Over time, the computer learns to associate instructions with the correct moves, and the correct configuration starts appearing higher up in the list. The system is perfectly trained when the first guess on its list is always the one the player intended.
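The learning loop can be sketched as a toy co-occurrence model: reinforce word–action associations whenever the player confirms an interpretation, and rank candidate actions by their learned association with the words in an utterance. This is a drastic simplification of the semantic parser SHRDLURN actually uses, and the vocabulary and action names are invented:

```python
from collections import defaultdict

class ToyShrdlurn:
    """Ranks candidate actions for an utterance, learning from player feedback."""

    def __init__(self, actions):
        self.actions = actions
        # How often each word has co-occurred with each confirmed action.
        self.counts = defaultdict(float)

    def rank(self, utterance):
        """Return actions sorted so the likeliest interpretation comes first."""
        words = utterance.lower().split()
        def score(action):
            return sum(self.counts[(w, action)] for w in words)
        return sorted(self.actions, key=score, reverse=True)

    def confirm(self, utterance, chosen_action):
        """The player picked `chosen_action` for this utterance: reinforce it."""
        for w in utterance.lower().split():
            self.counts[(w, chosen_action)] += 1

bot = ToyShrdlurn(actions=["remove_red", "remove_cyan", "add_red"])
bot.confirm("remove the red blocks", "remove_red")
bot.confirm("remove the cyan blocks", "remove_cyan")
print(bot.rank("remove the red blocks")[0])  # prints remove_red
```

As in the game, the system starts out knowing nothing; the player’s selections are the only training signal, and the intended interpretation climbs the ranking as associations accumulate.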

The figure above shows some example levels from the game. How would you instruct a computer to go from the start to the goal?

Sixteen participants took part in our experiment. Half of them played the game with no restrictions, while the other half were given specific instructions: they were only allowed to use the following 11 words: all, cyan, red, brown, orange, except, leftmost, rightmost, add, remove, to.

We measured the quality of the final system (i.e., how successfully the computer learnt to map language to instructions) as well as the cognitive load on participants. We found, unsurprisingly, that in the non-restricted setting people used a much wider variety of words, and much longer sentences. However, the restricted language participants seemed to be able to train their systems more effectively. Participants in the restricted language setting also reported needing less effort, and perceived their performance to be higher.

The figure above illustrates gameplay. A: Game with start and goal states and 2 intermediate states. B: The player issues a language command. “use only…” message appears only for players in restricted condition. C: The player scrolls through candidate configurations until she finds the one matching the meaning of the command. The correct interpretation (bottom) solves the puzzle.

By imposing restrictions, we achieved the same or better system performance, without detriment to the user experience – indeed, participants reported lower effort and higher performance. We think that a guided, consistent language helps users understand the limitations of a system. That’s not to say we’ll never desire a system that understands arbitrary human language. But given the current capabilities of AI systems, we will see diminishing returns in user experience and performance by attempting to accommodate arbitrary natural language input. Rather than considering one of two extremes – a specialised graphical user interface vs a completely natural language interface – designers should consider restricted language interfaces, which trade off full expressiveness for simplicity, learnability and consistency.

Here’s a summary in the form of a poem:

It was not meant to be this way
You cannot understand

This human dance of veiled intent
The spoken word and written hand

But let us meet at halfway point
And share our thoughts with less

To know each other’s will and wish
— not guess

Want to learn more about our study? Read it here (click to download PDF) or see the publication details below:

Mu, Jesse, and Advait Sarkar. “Do We Need Natural Language?: Exploring Restricted Language Interfaces for Complex Domains.” In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, p. LBW2822. ACM, 2019. https://dl.acm.org/citation.cfm?doid=3290607.3312975

Talking to a bot might help with depression, but you won’t enjoy the conversation

Mental illness is a significant contributor to the global health burden. Cognitive Behavioural Therapy (CBT) provided by a trained therapist is effective. But CBT is not an option for many people who cannot travel long distances, or take the time away from work, or simply cannot afford to visit a therapist.

To provide more scalable and accessible treatment, we could use Artificial Intelligence-driven chatbots to provide a therapy session. It might not (currently) be as effective as a human therapist, but it is likely to be better than no treatment at all. At least one study of a chatbot therapist has shown limited but positive clinical outcomes.

My student Samuel Bell and I were interested in finding out whether chatbot-based therapy could be effective not just clinically, but also in terms of how patients felt during the sessions. Clinical efficacy is only one marker of a good therapy session. Others include sharing ease (i.e., does the patient feel able to confide in the therapist), smoothness of conversation, perceived usefulness, and enjoyment.

To find out, we conducted a study. Ten participants with sub-clinical stress symptoms took part in two 30-minute therapy sessions. Five participants had their sessions with a human therapist, conducted via chat through an internet-based CBT interface. The other five had therapy sessions with a simulated chatbot, through the same interface. At the end of the study, all participants completed a questionnaire about their experience.

We found that in terms of sharing ease and perceived usefulness, neither the human nor the simulated chatbot emerged the clear winner, although participants’ remarks suggested that they found the chatbot less useful. In terms of smoothness of conversation and enjoyment, the chatbot was clearly worse.

Participants felt that the chatbot had a poor ability to “read between the lines”, and they felt that their comments were often ignored. One participant explained their dissatisfaction:

“It was a repetition of what I said, not an expansion of what I said.”

Another participant commented on the lack of shared experience:

“When you tell something to someone, it’s better, because they might have gone through something similar… there’s no sense that the robot cares or understands or empathises.”

Our study has a small sample size, but nonetheless points to clear deficiencies in chatbot-based therapy. We suggest that future research into chatbot CBT acknowledges and explores these areas of conversational recall, empathy, and the challenge of shared experience, in the hope that we may benefit from scalable, accessible therapy where needed.

Want to learn more about our study? Read it here (PDF) or see the publication details below:

Bell, Samuel, Clara Wood, and Advait Sarkar. “Perceptions of Chatbots in Therapy.” In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, p. LBW1712. ACM, 2019. https://dl.acm.org/citation.cfm?id=3313072

Research through design, and the role of theory in Human-Computer Interaction


Every day, designers create the world around us: every website you’ve visited, book or magazine you’ve read, every app you’ve used on your phone, every chair you’ve sat on, almost everything around you has been consciously designed by someone.

Since this activity is so important, an area of academia known as design research is concerned with studying how design works, and how to do it better. The ultimate aim for design research is to create a theory for designing a particular thing (such as websites, or books, or apps, or chairs) that teaches us how to design that thing well. One way to try and produce these theories is to actually design a bunch of things (websites or books or apps or chairs), and then document what worked and what didn’t. This is the basic idea behind research through design. In the rest of this post, I’ll explain a bit more about research through design, as well as what we can realistically expect from the theories we produce using this process.

What’s research through design?
Design shifts the world from its current state into a “preferred” state through the production of a designed artefact. My PhD dissertation described the design of two visual analytics tools I developed, with a focus on documenting and theorising those aspects of the design that (a) facilitate the specific user tasks I identified as being important and (b) reduce expertise requirements for users. Thus, the approach to knowledge production was research through design (Frayling, 1993). The distinction between research through design, and merely design, is one of intent. In the former, design is practiced with the primary intent of producing knowledge for the community of academics and practitioners. Consequently, the design artefact cannot stand in isolation – it must be accompanied by some form of discourse intended to communicate the embodied knowledge result to the community. Moreover, this discourse must make explicit how the artefact is sufficiently novel to contribute to knowledge. In a non-research design activity, neither annotative discourse nor novelty is necessary for success.

Zimmerman et al. (2007) propose four general criteria for evaluating research through design contributions: process, invention, relevance, and extensibility. Process refers to the rigour and rationale of the methods applied to produce the design artefact, and Invention refers to the degree of academic novelty. Relevance refers to the ability of the contribution to have a wider impact. Extensibility is the ability of the knowledge as documented to be built upon by future research.

Why is there no such thing as a complete theory of design?
When designing systems in some domain, it may seem an attractive proposition to seek a theory of design that not only characterises specifically the nature of these systems and how their important properties may be measured, but also prescribes a straightforward, deterministic strategy for the design of such systems. When I started out in my research, I wanted to produce a theory for how to design systems that would let non-experts use visual tools to perform statistics and machine learning. I initially anticipated that such a prescriptive theory would be elusive for multiple reasons, including the nascency of interactive machine learning, the incomplete characterisation of potential applications, and a wariness of the challenges surrounding “implications for design” (Stolterman, 2008).

Towards the end of my PhD, I came to the position (and I still hold it) that a complete design theory is not only elusive, but impossible – not just for visual analytics tools, but any design domain. This is because theory underspecifies design, and design underspecifies theory (Gaver, 2012). Theory underspecifies design because a successful design activity must culminate as an ultimate particular (Stolterman, 2008): an instantiated, designed artefact, subject to innumerable decisions, situated in a particular context, and limited by time and resource constraints. Design problems are inherently wicked problems (Buchanan, 1992); they can never be formulated to a level of precision which affords ‘solving’ through formal methods, and no theory for design can profess to provide a recommendation for every design decision. Conversely, design underspecifies theory, in the sense that an ultimate particular will fail to exemplify some, or even many, of the nuances captured in an articulated theory.

This is not to say that we should do away with theory altogether and focus solely on artefacts themselves. Gaver’s view, to which I am sympathetic, is that design theory is “provisional, contingent, and aspirational”. The aim of design theory is to capture and communicate knowledge generated during the design process, in the belief that it may sometimes, but not always, lead to successful designs in the future.

This post is heavily based on an excerpt from my 2016 PhD thesis:
Interactive analytical modelling, Advait Sarkar, PhD, University of Cambridge, 2016

References

Frayling, Christopher. Research in art and design. Royal College of Art, London, 1993.

Zimmerman, John; Forlizzi, Jodi, and Evenson, Shelley. Research through design as a method for interaction design research in HCI. In Proceedings of the SIGCHI conference on Human factors in computing systems, pages 493–502. ACM, 2007.

Stolterman, Erik. The nature of design practice and implications for interaction design research. International Journal of Design, 2(1), 2008.

Gaver, William. What should we expect from research through design? In Proceedings of the SIGCHI conference on human factors in computing systems, pages 937–946. ACM, 2012.

Buchanan, Richard. Wicked problems in design thinking. Design issues, 8(2):5–21, 1992.

Setwise Comparison: a faster, more consistent way to make judgements

I originally wrote this post in 2016 for the Sparrho blog.

Have you ever wondered whether doctors are consistent in their judgements? In some cases, they really aren’t. When asked to rate videos of patients with multiple sclerosis (a disease that causes impaired movement) on a numeric scale from 0 being completely healthy to 4 being severely impaired, clinicians struggled to be consistent, often giving the same patient different scores at different times, and disagreeing amongst themselves. This difficulty is quite common, and not unique to doctors — people often have to assign scores to difficult, abstract concepts, such as “How good was a musical performance?” or “How much do you agree or disagree with this statement?” Time and time again, it has been shown through research that people are fundamentally inconsistent at this type of activity, no matter the setting or level of expertise.

The field of ‘machine learning’, which can help to automate such scoring (e.g. automatically rating patients according to their disability), is based on the premise that we can give the computer a set of examples for which the score is known, in the hope that the computer can use these to ‘learn’ how to assign scores to new, unseen examples. But if the computer is taught from examples where the score is inconsistently assigned, the result is that it learns to assign inconsistent, unusable scores to new, unseen examples.

To solve this problem, we brought together an understanding of how humans work with some mathematical tricks. The fundamental insight is that it is easier and more consistent for humans to provide preference judgements (e.g. “is this higher/lower/equal to that?”) as opposed to absolute value judgements (e.g. “is this a 4 or a 5?”). The problem is, even if you have as few as 50 items to assign scores, you already have 50 x 49 = 2450 ways of pairing them together. This balloons to nearly 10,000 comparisons when you have 100 items. Clearly, this doesn’t scale. So we addressed this using a mathematical insight: namely, that if you’ve compared A to B, and B to C, you can guess with reasonably high accuracy what the relationship is between A and C. This ‘guessing’ is done with a computer algorithm called TrueSkill, which was originally invented to help rank people playing multiplayer games by their skill, so that they could be better matched to online opponents. Using TrueSkill, we can reduce the number of comparisons required by a significant amount, so that increasing the number of items no longer results in a huge increase in comparisons. This study has advanced our understanding of how people quantify difficult concepts, and has presented a new method which balances the strengths of people and computers to help people efficiently and consistently provide scores to many items.
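To see the scale of the saving, compare exhaustive ordered pairwise comparison, which grows as n(n−1), with the roughly n·log2(n) comparisons that a transitivity-exploiting comparison sort needs. The n·log2(n) figure is the textbook approximation for comparison sorting, not the exact count TrueSkill achieves:

```python
import math

for n in (50, 100, 400):
    exhaustive = n * (n - 1)              # every ordered pair compared directly
    sort_bound = round(n * math.log2(n))  # roughly what a comparison sort needs
    print(f"{n} items: {exhaustive} exhaustive vs ~{sort_bound} with transitivity")
```

For 50 items this is 2450 comparisons against roughly 282; by 400 items the exhaustive approach needs over 150,000 comparisons, which is why inferring unseen relationships is essential.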

Why is this important for researchers in fields other than computer vision?

This study shows a new way to quickly and consistently have humans rate items on a continuous scale (e.g. “rate the happiness of the individual in this picture on a scale of 1 to 5”). It works through the use of preference judgements (e.g. “is this higher/lower/equal to that?”) as opposed to absolute value judgements (e.g. “is this a 4 or a 5?”), combined with an algorithmic ranking system which can reduce the need to compare every item with every other item. This was initially motivated by the need to have higher-quality labels for machine learning systems, but can be applied in any domain where humans have difficulty placing items along a scale. In our study we showed that clinicians can use our method to achieve far higher consistency than was previously thought possible in their assessment of motor illness.

We built a nifty tool to help clinicians perform Setwise Comparison, which you can see in the video below: https://www.youtube.com/watch?v=Q1hW-UXU3YE

Why is this important for researchers in the same field?

This study describes a novel method for efficiently eliciting high-consistency continuous labels, which can be used as training data for machine learning systems, when the concept being labelled has unclear boundaries — a common scenario in several machine learning domains, such as affect recognition, automated sports coaching, and automated disease assessment. Label consistency is improved through the use of preference judgements, that is, labellers sort training data on a continuum, rather than providing absolute value judgements. Efficiency is improved through the use of comparison in sets (as opposed to pairwise comparison), and leveraging probabilistic inference through the TrueSkill algorithm to infer the relationship between data which have not explicitly been compared. The system was evaluated on the real-world case study of clinicians assessing motor degeneration in multiple sclerosis (MS) patients, and was shown to have an unprecedented level of consistency, exceeding widely-accepted clinical ‘gold standards’.
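The efficiency gain from comparing in sets rather than pairs can be illustrated with back-of-the-envelope arithmetic. Note that the set size of 5 here is an assumption for illustration; the study’s actual parameters may differ:

```python
# Sorting one set of k items into order yields k*(k-1)//2 implied
# pairwise relations from a single labelling task, whereas exhaustive
# pairwise labelling needs a separate task for every pair.
n = 100  # number of items to label
k = 5    # items presented per set (assumed value, for illustration)

pairwise_tasks = n * (n - 1) // 2     # exhaustive pairwise comparison
relations_per_set = k * (k - 1) // 2  # relations implied by sorting one set

print(pairwise_tasks)     # 4950
print(relations_per_set)  # 10
```

Probabilistic inference over these implied relations is what allows the system to avoid explicitly comparing every pair.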

To learn more

If you’re interested in learning more, we reported this research in detail in the following publications:

Setwise Comparison: Consistent, Scalable, Continuum Labels for Machine Learning
Advait Sarkar, Cecily Morrison, Jonas F. Dorn, Rishi Bedi, Saskia Steinheimer, Jacques Boisvert, Jessica Burggraaff, Marcus D’Souza, Peter Kontschieder, Samuel Rota Bulò, Lorcan Walsh, Christian P. Kamm, Yordan Zaykov, Abigail Sellen, Siân E. Lindley
Proceedings of the 34th Annual ACM Conference on Human Factors in Computing Systems (CHI 2016) (pp. 261–271)

Setwise comparison: efficient fine-grained rating of movement videos using algorithmic support – a proof of concept study
Saskia Steinheimer, Jonas F. Dorn, Cecily Morrison, Advait Sarkar, Marcus D’Souza, Jacques Boisvert, Rishi Bedi, Jessica Burggraaff, Peter Kontschieder, Frank Dahlke, Abigail Sellen, Bernard M. J. Uitdehaag, Ludwig Kappos, Christian P. Kamm
Disability and Rehabilitation, 2019
(This was a writeup of our 2016 CHI paper for a medical audience)

The 4 Drive Backup Solution for Mere Mortals

In this post I describe a minimal, yet comprehensive personal backup solution. It is relatively easy to implement, using only the built-in features of your operating system, and is quite cheap as it requires only 4 hard drives (and can be accomplished with even fewer). Despite being extremely simple, it has the characteristics of a complete backup system and protects against several causes of data loss. It is a sensible backup strategy as of June 2014. This post is aimed towards the technologically-inclined reader.

The solution
  • Preparation: Acquire 4 external hard drives, each as large as you wish, all of roughly the same capacity. I will refer to them as A1, A2, I1 and I2.
  • Archival drives: Drives A1 and A2 are archival drives. They contain data that you no longer keep on your primary computer, and data that you no longer expect to change. This might include photos, music, and old work. You must ensure that A1 and A2 always have the same content as each other.
  • Incremental backup drives: Drives I1 and I2 are incremental backup drives. They will contain a versioned history of all the files on your primary computer. For instance, you can set them both to be Time Machine drives. Time Machine is the incremental/differential backup software that comes standard with Mac OS X (alternative solutions are available for other operating systems).
  • Location: Drives A1 and I1 are stored at the same primary location, such as your home. Drives A2 and I2 are stored at a different, secondary location, such as your workplace.
  • What you need to do: Update the content on A1 and A2 at your convenience, making sure they are always in sync. Make incremental backups with I1 and I2 as frequently as possible (at least once daily). With Time Machine this amounts to merely plugging in the drive (or connecting to the same network as the drive, if you use Time Capsule, or you can use something like a Transporter).

And that’s it.
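Keeping A1 and A2 identical is the one step the operating system won’t do for you. A minimal sketch of a consistency check using only Python’s standard library (the mount paths are hypothetical; in practice a mirroring tool like rsync would do the copying itself):

```python
import filecmp
import os

def drives_in_sync(a1, a2):
    """Recursively check that two archival drives hold identical content.

    Uses filecmp's default shallow comparison (file size and mtime,
    falling back to content), which is fine for a quick consistency check.
    """
    comparison = filecmp.dircmp(a1, a2)
    if (comparison.left_only or comparison.right_only
            or comparison.diff_files or comparison.funny_files):
        return False
    return all(drives_in_sync(os.path.join(a1, d), os.path.join(a2, d))
               for d in comparison.common_dirs)

# Hypothetical mount points; adjust to wherever your drives appear:
# print(drives_in_sync("/Volumes/A1", "/Volumes/A2"))
```

Run something like this after each archival update, and copy across anything it flags.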

What this scheme protects you against
  • In the event of data loss due to a hardware or software failure, that is, if one of the drives fails or the data on it gets corrupted, there is always another drive with a copy of the same data. This drive may be used until the failed/corrupt drive is replaced.
  • In the event of data loss due to human error, such as accidentally deleting or overwriting a file, there are two incremental backups from which any historic version of the file can be restored.
  • In the event of data loss due to natural disaster (such as a fire, power surge, or flood) or theft, which destroys or removes the drives at one location, there is always a duplicate set of drives at the other location, which may be used until the destroyed/stolen drives are replaced. This is what is known as an offsite backup.
What this scheme doesn’t protect you against
  • Both archival drives or both incremental backup drives failing simultaneously: this is extremely unlikely, but if you’re worried about it you can add a third drive of each type.
  • Failure to make incremental/archival backups often enough: this is your problem, not a problem with the scheme.
Modifying the scheme if it doesn’t work for you

This scheme can be directly implemented if:

  1. You primarily use one computer, which is a Mac
  2. Your day-to-day work does not create huge (i.e. comparable to the size of your hard drive), constantly changing files
  3. You do not care for third party services or cloud services (which often require recurring monthly fees)
  4. You are somewhat conscious of but not too restricted by price
  5. You are okay with waiting a few hours to get going again from your backups in case the hard drive in your computer fails and you can no longer boot

If the above do not apply to you, it is easy to adapt this solution for other use cases. For instance, you can easily modify the solution if:

  1. You use Windows/Linux: I believe Windows has an equivalent to Time Machine called “Windows Backup”. Linux users can probably fend for themselves and find something that works for them.
  2. You primarily use multiple computers: You will need an additional pair of incremental backup drives for each additional computer you use.
  3. You need to be able to immediately continue from where you left off in case your computer stops working: You will need to start creating bootable clones, which can be achieved using software such as Disk Utility (comes standard with Mac OS X), SuperDuper or Carbon Copy Cloner. For Windows users, Windows Backup can also create bootable clones. These can be stored on additional drives or on your archival drives.
  4. You don’t mind third party or cloud services: I recommend looking into a solution such as Crashplan or BackBlaze. You can use these services to augment the 4 drive solution or to replace it entirely, depending on your level of trust and the quality of your Internet connection.
  5. You are extremely price conscious: It is possible to implement this scheme with only two drives. In this scenario you will have to create two partitions on each drive, one for archival and the other for the incremental backup. The drives must of course still be stored at separate locations. I personally prefer the 4 drive version because (1) hard drives are not yet capacious enough that cheap commodity drives can be partitioned into useful sizes for those with lots of data, (2) partitioning necessitates erasing the drive, (3) I am leery of increased opportunities for filesystem corruption with multiple partitions, and (4) it is much less effort to replace drives if they only serve a single purpose.
Choosing a mix of drives

Since you will be acquiring multiple drives, you have the opportunity to spread your risk even further. By buying drives from different brands, you reduce your vulnerability if any single manufacturer or hard drive model has a faulty run. It is also good to have a mix of hard drive ages, since very young as well as very old drives appear to have a higher failure rate than those between the ages of 1 and 3 years.

I hope this is of some use. I was tired of thinking about backups and tired of researching third party backup solutions, so I settled on this compact, no-frills setup that can cope with all major threats to your data. If you have a suggestion or notice a deficiency, please leave a comment!