
Rewiring Government


Interview: David Robinson on ethics and civil rights in a big data world



In this episode of Rewiring Government, CEO Joshua Goldstein talks to David Robinson, principal at Upturn, about civil rights in the digital age. They cover big data, the ethics rules governing company research labs, and ways to hold algorithms accountable, particularly when it comes to poor, vulnerable, or otherwise disadvantaged people.

Use the player above to listen, or subscribe on iTunes and Google Play! You can also add our RSS feed to your favorite podcast app. If you like this episode, rate and review us on iTunes, and tell your friends.

A transcript of the interview is below, edited for content and flow.


Josh: David Robinson, welcome to the Rewiring Government podcast.

David: Glad to be here.

Josh: You’ve had a really neat career at the overlap of technology and policy. Where did that begin for you? What was the motivation and what led you to your current work? 

David: I think if we were going to look all the way back, we’d have to rewind the tape to 1981.

I was born in September, right around the same time as the first IBM PC came out. I have a mild case of cerebral palsy, a motor impairment, so my handwriting isn’t very good. (The family joke was that I ought to become a doctor like my dad.) But the bottom line was that around grade school, I was really in a position to benefit greatly from technology.

When you’re in elementary school, whether you’re a good writer or not is really about penmanship and whether people can read what you’ve scrawled down, which in my case, many couldn’t.

When I was in about fourth or fifth grade, I got a word processor that I could use in school, and it was transformative. It was so empowering, and, as it turns out, I love writing. The idea was to compensate for the fact that my handwriting wasn’t very good, but, of course, it did so much more than that. I could change my mind and move text around, and [my writing could] be typed and printed out.

Just compare—socially, subtly—something handwritten from an elementary school kid versus something that came out of a printer. The latter, which comes from a computer, just has so much authority to it and [brings with it] all these secondary social things. So, for me, it’s always been the case that technology is tremendously liberating and has tremendous positive potential.

The other lesson that I took from my early experience was that the technology had already been there for years. Word processors weren’t new when I first got one in school. What changed was the rules. It became possible to have one in school because someone said, “We should use this tool in this way, and it could be helpful.”

To me, that’s always been part of the story of technology. It’s not only about what’s in the lab, but about how we get it into the field, and how we get it there in a way that works for people.

That’s a lot of what civic tech is about, and I think it’s a lot of what Upturn’s work is about.

Josh: Tell me about Upturn. You started the firm, and it’s now… two years old?

David: We turned five in August [2016].

Josh: Wow. I have a misshapen sense of time. So, tell me about Upturn and what motivated you and your partner, Harlan Yu, to start the firm.

David: I had been a student of Ed Felten’s as an undergrad at Princeton’s Center for Information Technology Policy. Ed was also Harlan’s thesis advisor and was actually the first staff hire for the Center.

The Center was going to combine computer science work with public policy, like looking at voting machines and whether they’re secure or not. Then, as it turned out, we all ended up bridging over into open data, which really was a launch pad.

One of the first papers that Harlan and I worked on together was called “Government Data and the Invisible Hand.” It said some things that now sound like old hat but were very new in 2009, like the idea that data released publicly by a government office really ought to be in an open and machine readable format, not just a PDF that’s designed for human inspection.

I think [that paper] was partially about having people understand the technological potential of machine readable and reusable data. But together, we were able to frame the issue in a way that really worked for a lot of people.

I mean the idea of transparency is traditionally a left-of-center political take on these questions. But, of course, right up there in the title of the paper [was] the Invisible Hand idea that government can’t have a monopoly on presenting information; lots of parties [need] to be able to do that. That’s more of a libertarian (at least in a small ‘l’ way) way of looking at the same set of facts.

That’s always been something that Harlan and I have enjoyed doing together—to think about ways of framing issues that are going to really work well in a political and policy context and are also going to match the facts about the technology.

When this paper came out, it was widely read. People started to come to us and ask for help implementing some of the ideas.

Our first consulting client was the government of Colombia, which was working on its open government plan under the multilateral Open Government Partnership. [They were seeking advice] about how to create a national portal and things like that.

Josh: Talk about the breadth of issues that you’re working on now. Technology has crept into a wide range of public policy issues. It’s probably always been there, but it’s more explicitly a tool now. Give a sense of the full range of things that you guys work on or are interested in.

David: We have a really broad spectrum of practice.

We work a bit on Internet freedom. We’re working with a coalition of computer scientists and engineers, funded by the State Department, that’s looking at a new approach to getting around censorship barriers in places like China (where they have the Great Firewall). So, we’re working on a tool that would let a network operator outside of a censored environment act as a gateway to the rest of the Internet.

You might imagine a group of U.S. universities saying, for example, “If you can reach any part of our network, then you can reach the rest of the Internet.”

If a censor wants to cut off anything, they have to cut off all the universities. That’s the price; it’s kind of a package deal. Part of that comes out of some earlier work we did on collateral freedom, where we looked at what really inspires censoring states to let some things through. The answer is that there is an economic benefit [to some traffic]. This is a new technology that tries to tie freedom traffic to that economic benefit.

In other areas of Upturn’s work, we’re working on special legislative drafting software for the House of Representatives (basically some XML and extra metadata on top of a word processing tool). They’ll know which section and which part of a bill is being worked on, and they can programmatically apply an amendment to a bill and see what the redline would be.

We [also] do some work on national security with the Brennan Center at NYU. They work on surveillance reform issues, and in the wake of the Snowden disclosures, there are a lot of technical questions that have come up.

For example, there’s one rule book for communications that are collected or gathered inside the United States, and then there’s a much more permissive set of rules if the interception of the communication happens abroad. So, if you or I were sitting in Washington emailing each other, how much of that might transit outside the United States? That turns into an engineering or technical question.

Apart from that, we do a range of work for foundations, like the Hewlett Foundation and the MacArthur Foundation, in planning [their investments]. There, we’re looking at cyber security issues broadly.

But the work at the center of our thinking is our civil rights work, in partnership with civil rights organizations. That looks at traditional civil rights issues—things like criminal justice reform or financial inclusion. In other words, we’re making sure that people have access to financing to live their lives in a productive way and are not being unduly or harshly treated by the police.

Those goals, of course, existed long before computers, but our pitch to an expert on civil rights or criminal justice or lending is this: “You know the issues, but technology is changing what’s possible.”

We used to work a lot in the financial justice world, making sure that a human being who was deciding whether you would get a loan wasn’t biased.

We would talk about animus, this idea of an irrational bad feeling that a person might have, let’s say because of a racist impulse or [ill-will] toward a racial group. You can check whether they’re going out of their way to deny loans to a minority group.

But in an algorithm-driven, data-focused, automated world, there’s a real problem with that—the computer doesn’t have animus. The computer doesn’t have psychology at all.

People have this really strong sense, which is sometimes accurate, that the computer will be fair. But the reality is that just because something is numerical or automatic doesn’t mean that it’s free from bias. Often these tools are going to reproduce bias that’s in the data their inputs are drawing from.

Josh: I wanted to dig into this area of civil rights in an era of big data. It’s something I’ve followed very closely and learned a lot from both you guys [at Upturn] and folks at the Center for Information Technology Policy.

There are some ways in which civil rights issues are similar in this era, but in other ways, they now require a new perspective.

I think back to the practice of redlining in the 1970s and trying to catch those who wanted to exclude certain people from neighborhoods. It strikes me that, in an algorithmic world, that’s not really what you’re looking for.

In what ways do you see that these issues require similar tactics from a pre-big data era, and in what ways do you think they require new tactics?

David: One way we can think about it is that the strategy is the same, but maybe the tactics are different. Whether we have a human or a computer making a decision, a lot of [the work of] civil rights is going to be about helping people notice when something is structurally off kilter in a way that is adverse to some group, typically a minority group or disadvantaged group.

[If] we have people being promoted or hired into management positions, sometimes that means that we need to help people notice that they may think of managers as a male role because historically the people doing it have been men. So, perhaps we need to proactively train people to rethink, or maybe rewire a little bit, the secondary traits, the cultural traits of a good manager. Maybe there’s actually more room there for variety in how to thrive as a manager than people are accustomed to encountering.

But, take that into a data driven world. If you throw all the features of your data at an algorithm and it can measure anything that you have on record about who makes a good manager, you’re likely to find that middle-aged white guys who like to play golf made great managers, and so those are historically excellent predictors of future management performance.

Here, you need to look at the data and ask who else is missing or what facts about the world are encoded in this data that aren’t purely about the thing you’re trying to measure. I mean if you’re trying to pick the best managers, then the fact that some groups have been excluded in the past, that’s noise for you, right?

You want to know not just who had whatever kind of lucky demography or other advantage that might have gotten them through some kind of eye of the needle, of some kind of unfairness. You want to know who is actually going to thrive now and help your organization thrive, and that’s not the same question. It’s not the same question because, historically, not everyone has had a chance to apply their skills to help an organization thrive.

If you just look at the data, that’s really a record of that history—in part of people doing great work, but also in part of other people being excluded.

If you look at just that history, you’re [not only] going to miss opportunities that were missed in the past, but you’re really going to miss opportunities to find the best candidates now.

And it’s not because the engineer has a racist tendency, or the business person who’s employing the technology, or the government leader in an office. It’s just because if we’re not proactive, if we’re not vigilant, in how we apply data, then we’re going to reproduce bad patterns.

There’s a set of skills now around pushing back when people make very generous assumptions about how well a data driven system is going to work. We need to empower people to ask some real questions about the details of the data and what it really means. And that’s a new skill that I think we’re all developing and we’re all helping each other to develop.
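
As a rough, hypothetical illustration of the pattern David describes—an editorial sketch with invented data, not code from Upturn—here is a small Python example in which true ability is identical across two groups, yet a "model" fit to historical promotion decisions still ranks one group lower, because the record captures who was promoted rather than who would have thrived:

```python
# Minimal sketch with invented data: historical exclusion becomes "signal."
import random
from collections import Counter

random.seed(0)

def make_historical_record(n=10_000):
    """Simulate past promotion decisions where true ability is independent
    of group, but group A was favored by the historical process."""
    rows = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        ability = random.random()                   # true, unobserved merit
        head_start = 0.25 if group == "A" else 0.0  # historical favoritism
        promoted = (ability + head_start) > 0.75
        rows.append({"group": group, "ability": ability, "promoted": promoted})
    return rows

history = make_historical_record()

# "Train" the simplest possible model: the observed promotion rate per group.
rate = {
    g: sum(r["promoted"] for r in history if r["group"] == g)
       / sum(1 for r in history if r["group"] == g)
    for g in ("A", "B")
}
print("historical promotion rates:", rate)  # roughly {'A': 0.5, 'B': 0.25}

# Score a new, equally able applicant pool with that model. Candidates from
# group B are screened out, purely because of what the past data recorded.
applicants = make_historical_record(1_000)
shortlisted = Counter(r["group"] for r in applicants if rate[r["group"]] > 0.35)
print("groups the naive model would shortlist:", shortlisted)
```

Even with richer features, proxies for group membership (the golf-playing example above) can play the same role: the model faithfully learns the history it is given, including the exclusion.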

Josh: What’s your take on what that skill set looks like now? [I ask that] because I think implied in the idea of Big Data is what you described—modeling or fitting a predictive model to existing data, whatever contours it takes. That gets to the problem that you described: If the historical data is not useful in seeing who’s a good manager, then maybe it shouldn’t be used exclusively.

What are those tools out of a technical, computer science toolbox, or maybe even out of a social science toolbox, that you think might be promising in helping folks identify when that has happened?

David: When you talk to people who wrangle data for a living, when you talk to real, practicing data scientists, you’ll get a much more subtle picture than comes out of the pages of Wired magazine. You’ll get a real sense that we struggle with data quality issues. We struggle with [identifying] the signal and the bias in this data. We struggle to calibrate this information with data from somewhere else.

There is a growing set of people who know to ask those questions, and I think for the rest of the world, what’s really needed is a cultural norm of expecting that there will be a discussion about the limits of data and [its] potential problems. There needs to be a well-informed critique. That doesn’t mean that each and every one of us needs to be a skilled methodological critic, but it does mean that we need access to that kind of expertise.

We talked about hiring. Another good example is in the predictive policing area of criminal justice. People will say that these systems predict where crime might happen in the future, or at least that’s how they’re billed. And so, the police will go to a [certain] place and focus on certain people based on what this computer tells them.

What you’ll hear is that these systems are based on historic crime statistics, right? But, what they really, typically mean is that the system is based on historic crime reports or on arrest statistics.

This may sound sort of tautological, but it’s worth stopping to think about. It’s a record of what the police detected, which partly tells you what may have been going on on the ground in a certain spot, but it also partly tells you what police are focusing on. It’s partly a record of where they chose to look, especially with certain kinds of crimes, where we know that enforcement is very different in different neighborhoods.

Let’s take the case of drug possession. We have public health data down to the census-tract level that tells us that drug use and also drug sales are roughly uniform across geographies and demographic groups. But the arrests for drug possession and for drug dealing are dramatically concentrated in communities of color.

There are a whole bunch of reasons why the data they have is different in those areas, and among the reasons are that police have a historic tendency to focus on certain neighborhoods and that police have more freedom to search public housing.

So yes, it’s good that we have the data that we have in a police department, and, yes, the data has some value. But, I think it’s not just a yes or no. It’s not just [whether] this data is good and we should use it uncritically, or [it’s] bad and we should not use it at all. To the contrary, I think lots of different data sources have something to add.

So, in the case of looking at crime, you might look at police data, but you might also look at public health data. You might look at admissions for overdoses or for gunshot wounds to get a sense from outside of the system you’re trying to direct.

You predict that an officer should go somewhere because a crime is likely to happen there, and then you learn more about what happens in the spot where you predicted it would. You need measurement from outside the system in order to really calibrate.
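
To make that calibration idea concrete, here is a purely illustrative Python sketch (every number is invented): recorded drug-possession arrests are compared against a public-health-style estimate of underlying drug use, and the ratio surfaces enforcement intensity rather than crime itself:

```python
# All figures are hypothetical. The point is the comparison across sources,
# not the specific numbers.
neighborhoods = {
    #              (population, est. past-year drug-use rate, recorded arrests)
    "Northside": (40_000, 0.09, 45),
    "Downtown":  (25_000, 0.10, 60),
    "Eastside":  (30_000, 0.09, 310),
    "Riverview": (35_000, 0.10, 280),
}

print(f"{'neighborhood':<12} {'est. users':>10} {'arrests':>8} {'arrests per 100 users':>22}")
for name, (pop, use_rate, arrests) in neighborhoods.items():
    est_users = pop * use_rate                # outside, survey-based estimate
    intensity = 100 * arrests / est_users     # enforcement intensity, not crime rate
    print(f"{name:<12} {est_users:>10.0f} {arrests:>8} {intensity:>22.1f}")

# Fed only the arrest column, a "predictive policing" model would flag Eastside
# and Riverview as hotspots; the outside baseline suggests the difference is
# mostly a record of where police chose to look.
```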

Josh: That’s such a crucial point, and I think that’s something that is fundamentally different or possible in an era of multiple data sources.

When I look at something like the White House report on big data and civil rights, the unit of analysis is often a particular area, like criminal justice or credit scores. Many people think there’s an implication there that you can nail a single source of data as being biased or discriminatory. But I think what you’re saying is that it’s really about this broader picture, looking at different sources of data together to get something of a better ground truth.

And that strikes me as something that is more and more important to do. I think about things like Airbnb; recently, there was a study looking at whether there is bias baked into who gets accepted as an Airbnb guest.

From a technical perspective, that also seems like a challenge, in terms of looking at those multiple sources together. This gets to what some folks call “data commons” or a set of shared data sources.

Do you see a future in which we’re syncing multiple data sources more and more? And how can that be leveraged for some of the goals we’re talking about?

David: I think it’s really early days, and I would say every data set has some bias in it relative to what you’re trying to measure.

Any time you’re measuring something or recording something, there was some original context there for why those measurements were gathered.

For example, one thing we’re seeing is that data brokers have commercial data that was originally designed to help target advertisements. If it had even a little bit of signal in it about who might want to buy a lawnmower, then it was a viable product. Now, we’re using that data to do much more personalized, much more targeted stuff, where sometimes the stakes are higher.

You’re not even going to learn about a credit card or some other favorable offer because it’s being hand-picked and targeted at a handful of people. In situations like that, it’s important to realize that the data has lower fidelity, since it was originally gathered in a much more rough and ready way for a much rougher purpose.

In terms of how to pull pieces together, I think we’re still figuring out what that looks like. Part of the battle, frankly, is to make people critical in the way they look at the data and its biases. What’s signal and what’s noise from their perspective, in the data that’s available to them?

Once we get to a baseline where that question gets asked every time we’re looking at an application of big data, I think we’re in a much better place to weave things together.

Powerful centralized institutions, like huge government offices, have most of the data. Within the government, intelligence agencies have greater facility and greater available data than any other part of the government.

Within [the private sector], there are a handful of companies. I was reading yesterday that if you look at the work on deep learning, which is one of the advanced big data techniques for mining and analyzing information, a rapidly growing fraction of the academic papers coming out of that field are co-authored by researchers in corporate R&D labs instead of in academic labs.

Part of the reason for that is that Google and Facebook and a handful of other firms are where the data is available. If you’re looking at big data, you need to be in a place where it’s available.

Uber bought out Carnegie Mellon’s lab on self-driving cars, which is a very data intensive thing; 40 out of their 130 researchers decamped en masse from an academic setting to this company.

Andrew Ng, who co-founded Coursera and taught its course on machine learning, now works for Baidu in R&D.

One of the big things that is an open question in my mind is this: Right now, it’s powerful, central institutions that really can weave the data together. If we want to have the capacity to combine data [that can be analyzed] and be more broadly and democratically shared across society, that’s a different kind of challenge.

We need to build those skills and make that data available in different ways to more people. Otherwise, there’s a centralization of power that happens.

Josh: I have definitely experienced that from a research perspective; if I were inside Facebook or one of these companies, I’d have much more insight into these really interesting questions.

There’s a world in which, in the future, some of that data might be more publicly shared, or at least shared with researchers who are responsible. There’s another world in which, when Facebook does their own experiments, they’re much more explicit about the ethical side of things. But then, there’s a third world that I hadn’t really thought about until I was in Europe recently.

The EU laws around access to personal data are actually quite different. Under EU law, you have the right to petition the Ubers and ad trackers of the world to get access to your own data.

Again, it’s still early days. When they get these requests, they don’t really know what to do with them yet because it’s still new. But there’s this other world that you could imagine where people can have access to their own data and compile that composite picture that we talked about on the unit level.

I think that’s an important future to consider. [It’s important to look at] discrimination around all these things that are legally important in the U.S.—race, gender, and these sorts of things.

But there’s also more arbitrary discrimination that we’d probably all find regardless, if we all had perfect access to our data. Maybe it’s an arrest warrant from before you were 16 that wasn’t expunged and is causing some problems.

I get the sense that having that holistic picture from an individual perspective is potentially promising as well.

David: I think I’m a little bit unsure about that approach, partly because [it would be difficult] for most people to obtain their data from different sources, even in a world in which you’re free to petition for it.

If you think about the number of providers that we’re sharing data with, I think it’s hard to pull that back together. Each of us may have a different constellation of sources we’re pulling data from—if we were to get a complete archive from all the cloud providers who know different things about us, it’s going to be a different fingerprint of things for each person.

Then, you’ve got somebody with all these different sources, and they want to weave them together. But unless they have a loom of their own or really are a data-wrangling individual, they’re going to rely on others for that.

A lot of the value we’ve seen has really been in apples-to-apples comparisons, where you’re looking at my Facebook feed versus your Facebook feed, or tweets from people in this environment versus tweets from people in that environment. I think that’s one of the reasons these companies have such a powerful role: they have the same kind of data about a huge population of people.

How to get a more democratized view of that…? I’m not sure whether combining across sources is going to be feasible.

I wonder [about] even just getting your data from one source—let’s say, Google Takeout, where Google lets you pull your data. Could there be a better set of tools, even just for Google data, that all users could use to pull out and see what’s there?

And then of course, once you start to realize that there are all these handy ways that you might visualize, say, your location history, you then think, “Well, if it’s really useful, why doesn’t Google just provide that for everyone?” And indeed they increasingly do.

[Take] the location history on [Google] Maps. They’ve had that data for a long time, but the creepiness threshold [and] the social norm has moved just enough that they can now show that to us, and we think it’s cool instead of creepy.

I think you’re going to see growth with that.
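
As a concrete, hedged example of the kind of personal-data tooling being described, here is a small Python sketch that summarizes a Google Takeout location-history export. The file path and JSON layout are assumptions—Takeout’s format has changed over the years—so treat this as an illustration rather than a recipe:

```python
import json
from collections import Counter

def summarize_location_history(path: str, top_n: int = 5) -> None:
    """Print the most frequently recorded locations in a Takeout export,
    assuming the older layout: a top-level "locations" list whose entries
    carry latitudeE7 / longitudeE7 integer coordinates."""
    with open(path, encoding="utf-8") as f:
        data = json.load(f)

    points = data.get("locations", [])
    # Bucket points into roughly neighborhood-sized cells by rounding the
    # coordinates to two decimal places (about 1 km in latitude).
    cells = Counter(
        (round(p["latitudeE7"] / 1e7, 2), round(p["longitudeE7"] / 1e7, 2))
        for p in points
        if "latitudeE7" in p and "longitudeE7" in p
    )

    print(f"{len(points)} recorded points")
    for (lat, lon), count in cells.most_common(top_n):
        print(f"  about {count} points near ({lat}, {lon})")

if __name__ == "__main__":
    # Hypothetical path; point it at your own export.
    summarize_location_history("Takeout/Location History/Records.json")
```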

Josh: Do you see a world in which other companies follow that?

There are a couple of core companies that allow some extraction in that way. And then you have others. Frank Pasquale’s great book, The Black Box Society, talks about a bunch of these other companies that were built around the idea that the way they target you is opaque.

Do you see a world—either through corporate norms, which I’m somewhat skeptical of, or legal tools—where that becomes more and more possible?

David: One of the things that becomes really important when we think about how companies behave is to really think about their incentives.

Look at a company like Google. They have this interesting blend where they want to target ads at you, but they don’t want their data about you to be too easily available. After all, what they know about you is the thing they have that their competitors don’t.

So, there are interesting moves, like privileging HTTPS or TLS connections on the web; that has all kinds of positive security benefits. It also reduces the amount of information that other sites receive when you arrive, about where you came from and things like that.

Part of their play is that they want you to be transparent to them, but not to others. And so, there’s this interesting back and forth.

[Let’s look at] Facebook. It’s got more than a billion users, and most American internet users will check it today. That’s a really powerful role.

When you see anxieties, like this political thing about whether they were suppressing conservative news, it’s because people know that it’s important what gets shown to them, [especially] which pieces of their network get shown to them on Facebook. I frankly have been impressed by the way that Facebook has handled that in the last few weeks.

I think that this discussion also shows us that people care and that people are not about to stop caring what Google does or what Facebook does. That central role that these handful of platforms have achieved comes with a level of scrutiny. (We could throw Microsoft in there and talk about the way that Windows 10 tracks your typing.)

I don’t know how much we can expect lots of companies to give us lots of data about ourselves. But I do think that the central players in this category need to have good public policy functions and be ready to have those discussions.

Josh: To this point of companies “buying” the researchers and running experiments, most of which we don’t have access to, the almost canonical example is Facebook’s study on whether they could make us happy or sad based on what we’re exposed to in our news feeds.

What is the responsibility of researchers who are sitting inside of these companies instead of in universities?

David: That’s a great question, to which I might add another question: “Who’s a researcher inside these companies?”

Many people are running A/B tests to figure out what works and what doesn’t. Some of the debate around the Facebook study and other big data studies [revolves around whether this testing should fall under the Common Rule]—that is, getting people’s consent and not harming them.

For example, James Grimmelmann has been out on the end of the spectrum, saying that this is human subjects research that should fall under the Common Rule, either federally or [on a state level]. It turns out that Maryland has a particularly exacting law that says lots of stuff should be covered by the Common Rule.

Just looking at the ethics rules at the highest level, it’s hard to extrapolate what they’re actually going to require operationally. [For example,] when something falls under the human subjects regime [or] the Common Rule, there’s review by a bureaucracy that is outwardly charged with protecting ethics. (It’s often a liability and risk mitigation instrument for universities that are hosting research.)

In a big data world, there’s this interesting problem where you don’t really know how to think about a human subject. Were we all subjects of an experiment when Facebook tweaked its news feed? I mean Facebook tweaks its news feed all the time in reality.

If you’re not careful in imposing obligations on “research,” you can end up in a world in which the trigger for [calling] something research is “whether it produces generalizable knowledge” or “whether it’s published to advance public understanding.” Then, if you want to behave ethically, [you] never publish any research because that’s going to trigger these requirements.

In fact, notoriously, that’s a problem that has arisen in hospitals with best practices research, [asking questions] like “What are the right ways of encouraging surgeons to wash their hands every time they do certain procedures?” People were reprimanded for researching because they didn’t stop to ask each person on a gurney whether they were ‘okay’ being part of a study. But they were just trying to make things safer.

We need to have a thoughtful conversation about how to be ethical when we’re dealing with big data research and learning about people, whether in companies or in academe. I am skeptical about treating the human subjects regime for university-based research as if it were an ethical cure-all or a one-size-fits-all way of solving our ethical problems.

I was encouraged to see that Facebook recently started an ethical review group, with undisclosed internal procedures and undisclosed membership. Obviously, it’s a delicate question for them—how much of that process to expose to the world—but it’s a good conversation to be having.

Josh: If folks wanted to learn more on this topic of civil rights in an era of big data, one of those fast moving topics, who should people look to? Who are people that you enjoy reading on this subject?

David: I’ll give three examples.

I’ll first make the self-serving observation that Equal Future, a newsletter that we publish at Upturn every week on Medium and by email, is about exactly this—civil rights and data and technology. So, that’s a resource for folks.

Other leaders are really doing great work in the field. I would point to danah boyd, who recently gave a great talk at the Personal Democracy Forum called “Be Careful What You Code For.”

I would say that Anil Dash has done great work, in the tech world in particular, in helping to sensitize folks to civil rights and surprising implications of technologies that they’re building or working on.

There’s also a whole world of civil rights blogging that’s not expressly technological. For our predominantly male, predominantly young, predominantly white and Asian tech community, one of the best things that we can do is to listen to other voices who come from other backgrounds.

I’ll give one other concrete example. Nicky Case, an interactive designer, made something called the Parable of the Polygons about how housing discrimination emerges from the reasonable preferences of each one of us. He’s done great work illustrating civil rights issues in a technologically accessible way.

Josh: I want to end on a question that relates to one of the reasons we started this podcast. There was this sense that you could put an artificial boundary around the era of tech policy issues from the beginning of the Obama administration to roughly now or a year from now. And you and I have been following these issues for a long time.

If you were to step back to the big picture, what would you say are some of the things that we’ve gotten right as a community or were right to focus on? And what are some of the things that, moving forward, we would need to approach differently?

David: Harlan and I did a paper a little while ago called “The New Ambiguity of ‘Open Government.’” In it, we talked about a machine-readable bus schedule for the public buses of Pyongyang that you can get on Google Maps.

That’s open data, which is great. But does that in any way suggest progress toward an open government in North Korea? No, it doesn’t.

We’ve oversold the Open Data movement to a degree, [with regards to] what would be achieved and how easily it would be achieved. There is now, I think, a very healthy reconsideration [happening].

I think about what Tom Steinberg and Josh Tauberer have been writing about how interesting—how rich and human—the challenges of making government better have turned out to be.

It used to be a 1.0, hackathon world of “We’re just going to solve this in a weekend.” I think, fundamentally, the goal of solving a problem instantaneously was always a little bit silly. But all of that work points toward a longer-term, more incremental, more real force for change that’s really about [giving] everyone in government a sense of technology’s potential and what it can achieve for them in the long run.

For example, there was an oversight hearing recently about 18F and the US Digital Service, which are two federal efforts to bring great, flexible, bureaucratically unencumbered technological work into federal government.

It’s broadly acknowledged now that this is a really important and valuable thing to do, and it’s not a partisan perspective. It’s a widespread perspective that we need to take the success of Silicon Valley and make that available to government at all levels.

To me, that sense of bringing technology to our problems in a collaborative and incremental way—ultimately, more than any overnight fix to any deep social problem—is the real legacy of the work we’ve been doing.

Josh: Well said. David, it’s been a real pleasure. If folks want to learn more about you and what Upturn’s doing, where should they go?

David: TeamUpturn.com describes our work, and I’m @DGRobinson on Twitter. I’m constantly talking and listening and engaging on these issues, and I’m glad to hear from folks directly, if you’re listening and have thoughts about what we’re doing or what we ought to be doing.

Josh: It’s been great fun. Thanks again for joining us.

David: My pleasure.

Joshua Goldstein is the CEO of The Department of Better Technology.

Want more articles like this? Subscribe to our newsletter.