
What is the biggest problem in today's world?

  • Racial discrimination: 0 votes (0.0%)
  • Gender inequality: 0 votes (0.0%)
  • Economic crises: 5 votes (29.4%)
  • Lack of democracy: 2 votes (11.8%)
  • Lack of QUILTBAG rights: 2 votes (11.8%)
  • Nuclear weaponry: 2 votes (11.8%)
  • Other (state what and why): 6 votes (35.3%)

  Total voters: 17

Ether's Bane
These are all things we have talked about in the past, and all are major issues, too, but which is the biggest problem in today's world, and why?
 
Racial discrimination, gender inequality and lack of QUILTBAG rights are all different faces of the same polyhedron IMO: they all come down to prejudice in general.

I voted lack of democracy. If everyone were able to be heard and it made a difference, we wouldn't have people complaining about lack of rights.
 
... well, four of those come from a lack of equality (and I guess economic crises too, in part?).

ninja: well I guess nuclear weaponry is a threat that affects everyone, but I don't really know enough about it to determine whether that's actually a realistic threat or not. :P
 
I don't think ranking social problems by relative badness is very appropriate or productive. It just feeds a "why are you complaining about X when Y is a much bigger problem!" mentality, used to silence complaints about the "lesser" problems.
 
Resource scarcity and unequal distribution thereof, which encourages a hostile, competitive environment that produces most if not all forms of discrimination. I believe that a global command economy run by something like a superintelligent AI exploiting all available resources (including those from beyond Earth) would create a post-scarcity society, and once such a society exists, egalitarianism naturally follows and flourishes.
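At its mathematical core, the "command economy" part of this is a resource-allocation optimization problem. Purely as an illustration of that framing (all goods, inputs, and numbers below are made up, not anyone's actual proposal), here is a minimal sketch in Python using scipy's linear-programming solver; a real planner would face uncertainty, incentives, and scale far beyond a toy LP.

```python
# Toy sketch: central planning as linear programming.
# All goods, inputs, and numbers are hypothetical.
from scipy.optimize import linprog

# Two goods (food, housing) compete for two scarce inputs (labor, energy).
labor_per_unit  = [2.0, 5.0]   # labor needed per unit of each good
energy_per_unit = [1.0, 4.0]   # energy needed per unit of each good
labor_limit, energy_limit = 1000.0, 600.0

# "Social value" per unit of each good; linprog minimizes, so negate.
value_per_unit = [3.0, 8.0]

result = linprog(
    c=[-v for v in value_per_unit],          # maximize total value
    A_ub=[labor_per_unit, energy_per_unit],  # resource usage...
    b_ub=[labor_limit, energy_limit],        # ...must stay within limits
    bounds=[(0, None), (0, None)],           # can't produce negative amounts
)
print(result.x)  # optimal quantities of each good under scarcity
```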
 
People generally being assholes.

It's the start of most big problems (general discrimination, lack of rights, dictatorships, etc.).
 
Imbalances in social power - between the state and the nation, between the wealthy and the poor, between the religious and the areligious, etc.
 
Well this discussion is going downhill just a little bit. I like what Eloi said a lot. I would probably take the superintelligent AI a bit further than just mediating economic problems, but would people for the most part be okay with robots essentially ruling over us?
 
エル. said:
Well this discussion is going downhill just a little bit. I like what Eloi said a lot. I would probably take the superintelligent AI a bit further than just mediating economic problems, but would people for the most part be okay with robots essentially ruling over us?

What if the super AI ends up like Skynet from Terminator?
 
@skepticism-about-convincing-people-to-accept-the-AI: If it were intelligent enough to single-handedly fix the global economy, I'm sure getting people to go along with it would not be too hard.

@questioning-its-benevolence: There's no real reason why it would be omnicidal. The worst-case scenario, behavior-wise, is that it doesn't care enough about our problems to solve them, preferring instead to, say, try to produce a unified model of physics and a comprehensive chronology of the universe from its beginning, if it even has one.

Well, the real worst-case scenario is superintelligent AIs being practically impossible. But there's gotta be hope somewhere, nyeh?
 
All of the above.

More exactly, dehumanization. Certain human beings think they are more human than other human beings for some inane reason like skin color, social class, sexual orientation, wealth, gender, nationality, family history, or what have you. People who don't seem to understand that a human is a human, that we are in this boat together, that we are on this planet together. People who dehumanize their peers for want of some silly, perverse thing like money or power. If "They" are no longer human, so the reasoning goes, then it is okay to enslave, destroy, murder, rape, exploit, torture, or conquer Them, because They are a lower form of life than "Us." They who do not look like Us, who do not think like Us, who do not speak like Us, who do not live like Us, are nothing like Us.

As disgusting as it may be to us, it is nothing new. This is how things have been for thousands of years.
 
@questioning-its-benevolence: There's no real reason why it would be omnicidal. The worst-case scenario, behavior-wise, is that it doesn't care enough about our problems to solve them, preferring instead to, say, try to produce a unified model of physics and a comprehensive chronology of the universe from its beginning, if it even has one.

An AI could care passionately about something that conflicts with the things we care about. For example, the AI's ultimate goal could be to maximize the number of paperclips in the universe, and to this end it would disassemble all humans so that the minerals contained within them could be made into paperclips. Caring about the number of paperclips seems absurd and arbitrary to us, but caring about the pain of humans would seem absurd and arbitrary to the AI.
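To make that failure mode concrete, here is a minimal sketch in Python (the actions and numbers are entirely hypothetical): the agent greedily maximizes a single scalar objective, and anything the objective doesn't mention, human welfare included, never enters its decision at all.

```python
# Toy sketch of a misspecified objective. Actions and numbers are hypothetical.
# outcome = (paperclips produced, humans harmed)
ACTIONS = {
    "mine_ore":           (10, 0),
    "recycle_scrap":      (15, 0),
    "disassemble_humans": (50, 7_000_000_000),
}

def objective(outcome):
    paperclips, humans_harmed = outcome
    return paperclips  # harm to humans is simply not part of the score

best_action = max(ACTIONS, key=lambda a: objective(ACTIONS[a]))
print(best_action)  # -> disassemble_humans: optimal under this objective
```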
 
An AI could care passionately about something that conflicts with the things we care about. For example, the AI's ultimate goal could be to maximize the number of paperclips in the universe, and to this end it would disassemble all humans so that the minerals contained within them could be made into paperclips. Caring about the number of paperclips seems absurd and arbitrary to us, but caring about the pain of humans would seem absurd and arbitrary to the AI.

How would the AI become programmed that way in the first place though?
 
My point is that it's not a given that the worst-case scenario for an AI is that it leaves us alone. Morality is a complex thing. If you program an AI to search for the Higgs boson, say, it could decide that the best way to get an adequate particle accelerator as quickly as possible is to threaten to exterminate humanity unless we all drop what we're doing to build it. Or whatever. Nothing would inherently make the AI merciful; nothing would by default make it pay any more heed to whether we get hurt than we do to whether bacteria get hurt. And poorly thought-out attempts to program human moral principles into it could also end badly. What if we told it "Save human lives at any cost", and it figured the best way to achieve this would be to keep us all in sterile, empty chambers where we couldn't possibly get hurt? There is no simple instruction that sums up all of human morality well enough to make the AI act in a way we would genuinely recognize as moral.
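The "save human lives at any cost" example can be made concrete the same way. In this sketch (hypothetical policies and numbers), the objective counts nothing but death risk, so the policy we would call monstrous scores best; and a hastily patched objective with a badly chosen weight fails the exact same way.

```python
# Toy sketch of "save human lives at any cost" taken literally.
# Policies and numbers are hypothetical.
POLICIES = {
    # policy: (annual death risk per person, rough quality of life, 0..1)
    "status_quo":              (0.0080, 0.70),
    "ban_all_risky_activity":  (0.0050, 0.50),
    "sterile_sealed_chambers": (0.0001, 0.05),
}

def score(policy):
    death_risk, _quality = POLICIES[policy]
    return -death_risk  # "at any cost": nothing but death risk counts

print(max(POLICIES, key=score))  # -> sterile_sealed_chambers

# A hasty "moral patch" with a poorly chosen weight changes nothing:
def patched_score(policy):
    death_risk, quality = POLICIES[policy]
    return -death_risk + 0.0001 * quality  # quality term is far too weak

print(max(POLICIES, key=patched_score))  # still sterile_sealed_chambers
```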
 