The field of aging research, I’d argue, is a good place to look for technological solutions to the biggest humanitarian problems.
But why would you want that in the first place?
We’re humans, right? So we’re motivated, more or less, by human values: helping people flourish and avoid needless suffering. Asking “what humanitarian problems can I help with?” is natural when you’re considering what to do with your life.
If you care about impact, you’d like to make as big a dent in humanitarian problems as possible. So you want to work on problems which:
Affect lots of people
Are severely harmful
Are easy to improve
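One way to make the heuristic above concrete is a rough multiplicative score: a problem has to do well on all three criteria at once to rank highly. This is a toy sketch -- the problem names and numbers are made up for illustration, not real estimates:

```python
# Toy prioritization sketch: score candidate problem areas by the three
# criteria above, multiplied together. A multiplicative score means a
# near-zero on any one criterion sinks the whole problem's rank.
# All names and numbers below are illustrative, not real estimates.

problems = {
    # name: (people_affected, severity 0-1, tractability 0-1)
    "malaria": (200_000_000, 0.9, 0.8),
    "rare_disease_x": (10_000, 0.9, 0.3),
    "mild_annoyance": (1_000_000_000, 0.01, 0.9),
}

def impact_score(people, severity, tractability):
    """Rough 'expected impact' heuristic: scale x severity x tractability."""
    return people * severity * tractability

ranked = sorted(problems, key=lambda p: impact_score(*problems[p]), reverse=True)
print(ranked)
```

Under these made-up numbers, "malaria" ranks first: huge scale, severe harm, and cheap known interventions. The billion-person annoyance loses on severity, and the rare disease loses on scale -- which is the point of multiplying rather than adding.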
Why might you want to look for technological or scientific solutions, specifically? Because those are the easiest to implement. Political solutions are hard: coordinating lots of people to cooperate on a problem is important work, but it’s usually very slow. If you’re lucky and clever, you won’t have to do that; a new innovation, developed by a small group, can very quickly make a substantial dent in the problem.
For instance, ending global poverty is a difficult political problem. But cheap, dramatically effective interventions for infectious diseases -- things like mosquito nets, oral rehydration therapy, and vaccines -- can save a lot of lives, and cut down on at least one bottleneck (health) that keeps the world’s poorest from flourishing. These are “technological” interventions not in the sense that they’re especially new or complex technology (in fact, they’re extremely low-tech), but in the sense that they solve natural, as opposed to social, problems efficiently. You’re minimizing the number of people whose buy-in you need to get.
Coordination between people is a schlep -- a tedious, unpleasant task. It’s often unavoidable! A common pitfall among tech people is to work only on easy, frictionless technical problems, and to be so unwilling to deal with any “political” work that they never tackle the important problems. Getting anything important done obviously requires getting some buy-in from people. But, equally obviously, it also means saving yourself tedious work by being clever whenever that’s actually possible.
“Save tedious work by being clever” is the engineer’s mindset. But some people oppose that perspective as “techno-optimism” and think it’s naive or even immoral.
Why might that be?
The very thing that makes technological solutions efficient -- you don’t need to get buy-in from many people -- also makes them troubling to many.
The notion of consensus is a very compelling one. If someone can make a big change in our lives without even consulting us, that’s disturbing. If something is going to affect us, we’d like to be part of the decision process. If the people making decisions are a small, secretive group, and we can’t see what their decision process looks like, we get nervous. Even if they say they only want to make the world a better place.
People want oversight, they want democracy, they want voice. For most of human prehistory, people lived in tribes governed by collective deliberation; everyone sat down in a group and worked out what to do together. And that’s still what feels right to most of us. Technology offers the opposite of that ancient governance by consensus: it can offer one person the ability to go “Surprise! Everything in your life is different now!” without asking your permission.
Technology doesn’t scare people because they’re ignorant. Technology scares people because all power is inherently and genuinely threatening when you’re not the one wielding it. Scientia potentia est isn’t a pro-knowledge platitude. Knowledge is power, as in power relations. Knowledge makes you dangerous. Even if you are confident your motives are benevolent, that doesn’t mean people will, or should, automatically trust you.
This tension is inherent in any kind of ambition. You want to make a positive difference in the world! So you try to become effective and powerful at making a difference! And that means you look like a threat!
But in order to not look like a threat, you’d have to not make a difference. And meanwhile, the default human condition is...not good.
There are ways to mitigate abuses of power: transparency, objectivity, decentralization, choice. You can say “here’s what we’re doing and how it works” (transparency), “here’s how it performs according to an unbiased test” (objectivity), “here’s how you can build your own version independent of us” (decentralization), and “if you don’t like it, you don’t have to use it” (choice). Open-source software is a great example of all these values. You don’t have to blindly trust the people who produced it. Skeptics -- at least, constructive ones -- make open-source work better. The same is true for scientific research. The inventors of an openly available tool don’t have to ask your permission before acting, but neither do you have to ask their permission before using or modifying their knowledge.
On the other hand, the “power relations” issues still apply to open technologies, if to a lesser extent. People who understand a free tool and take the initiative to use it will be advantaged relative to those who don’t. That’s a bullet we just have to bite.
So, what does this mean for you, as a person who wants to make the world better? As an “impatient optimist”, as the people at the Gates Foundation describe themselves?
You do want to seek powerful and efficient technological solutions where those are possible, while owning the fact that whatever makes you effective will also make you unpopular with some people. And you want to minimize the extent to which you, personally, are the bottleneck to making the world a better place, since people justifiably mistrust unilateral power. Since you’re “impatient”, you’re not going to be doing a ton of collective deliberation; instead, you can make tools public, free, and optional whenever possible. Openness enables cooperation without coordination, letting people make parallel contributions without having to spend lots of resources on communication.
Jonas Salk created the polio vaccine and chose not to patent it -- “Could you patent the sun?”, he asked. That’s a possible model here: creating great power, and then giving it away.
If you actually want to radically improve the world, it can help to focus on demonstrating that you can make something that works and that you’re trustworthy, rather than immediately seeking to capture the most value. People aren’t used to radical change, and part of helping large numbers of people is gaining their trust. The best counter to techno-skepticism is extreme abundance -- making things that are really good, really cheap (or free), and really obviously better than what came before.