Anonymous asked:
no problem friendo
Anonymous asked:
Started to die down. I have tons of roleplay blogs on tumblr and I put them all on hiatus because I just can’t fucking write anymore, not like I used to, but my significant other keeps wanting to roleplay with me and I don’t think I told them (2)
About not really being able to write anymore so I’ve just been going with it because whatever, they don’t give long responses anyway so it’s not a big deal. But it’s been literally sucking up all of my time because they almost never sleep and all (3)
They want to do is write and so I’ve been dealing with writing every waking hour for like two weeks now and so I’m already irritated and tired right? And then they have this one character that’s basically depressed all the time and they wanted (4)
Their other character to kind of bully the depressed character (whom one of my characters was trying to help mentally) into a suicide attempt which is really fucked up and like. Man i don’t know usually I’m not so easily swayed by this shit but (5??)
Also this is rambling anon and I’m going to sleep so if u do decide to read and/or respond to that giant wad of shit can u please tag it with something like idk “rambling anon” or something idc so that I can see it when I wake up? If not that’s cool
Yes hello I was sleeping as well. Just tell them that you don’t really feel like writing so much anymore? Tell them how you feel? Tell them the situation doesn’t make you comfortable, maybe?
remember when u used to go over to ur friends house and youd go down to the ‘computer room’ to the dads old shitty desktop computer and sit on the giant black leather computer chair and ur friend would show u charlie the unicorn and epic rap battles of history type stuff on youtube while their younger siblings bugged you for a turn to use the computer
Anonymous asked:
Yeah she’s like that and tbh I’m fine with it and he’s fine with it and like, it’s nice but also strange to me bc I’ve never been that close with a friend before
the new down the rabbit hole episode is so good, it’s about Deep Blue and how it was the first computer to best the player considered to be the best chess grandmaster in the world. It’s so crazy to me that computer scientists started dreaming of this unbeatable chess machine in the 1940s and in 1997 it happened, in my lifetime. Fifty-ish years may seem like a long time, but in the grand scheme of things, in the grand scheme of human achievement over the past 200,000 years, it’s amazing how far technology has come in 50 years, and it almost makes me emotional. It’s like how I feel about how we’ve discovered exoplanets: when I was born there were only a handful that we had discovered, and in some 23 years we’ve discovered thousands. Seeing people achieve something so grand in so little time is one of the very few things that makes me optimistic about humanity
I like that my computer has an overclocking panel built right into it so I just push a button and my computer goes “you wanna set this mf on fire??? YOU WANNA FUCKIN LIGHT THIS BITCH UP????? FUCK YEAH”
Anonymous asked:
a great prank is renaming your username to "loading" on a game/computer program or even "reloading"... never fails
back in my day if u named urself null on a computer game it broke the game
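(a rough sketch of why that could happen, in TypeScript; this is a made-up example, not any real game’s code. If a game stores the literal string “null” as its “no value” marker, a player who actually names themselves null looks exactly like an empty slot.)

```typescript
// Hypothetical sketch: a player literally named "null" colliding with a
// "no value" check after the name round-trips through a string field.

interface Player {
  name: string | null;
}

// Pretend save format where an empty slot was stored as the string "null".
function loadName(raw: string): string | null {
  return raw === "null" ? null : raw;
}

const saved = "null"; // a real player who typed the name "null"
const player: Player = { name: loadName(saved) };

if (player.name == null) {
  // The game now treats this slot as empty and may crash or wipe the entry.
  console.log("No player found in this slot");
} else {
  console.log(`Welcome back, ${player.name}`);
}
```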
I get asked a lot in my line of research “what happens when we make a computer smarter than man?” And it’s a really good question. In some ways, academically, mathematically, etc., computers already are. So that leaves us with creating a computer that’s smarter than us in the remaining facets of humanity. Emotion, friendship, love, philosophy. Things that can’t be quantified. Yet.
And when we do make that computer, and someday we will, humans will be… useless. Obsolete. An old model that doesn’t work as well as the current one. At that point a few different paths will be taken, depending on the robot we build. A more peaceful outcome would be integration. Transhumanism. Enhancing humans to compete with our creations.
And then there’s the second outlook. The one that humans do. Humans don’t keep around old technology when new tech comes out. We throw it away, sell it, destroy it. No one keeps around a 20 year old computer and updates it to compete with a modern computer, do they? No, they don’t. They don’t have it compete with the newer one.
They throw the old one away. They get the new model. And they forget the old one.
But if a computer were morally and philosophically better than a human, wouldn’t it treat us better than we would treat it?
Not necessarily. If we stick to Asimov’s three laws, there are so many loopholes, and even more movies about those loopholes. I, Robot is a recent example.
A very likely outcome, though, has nothing to do with them harming us like so many think. They would simply win by longevity. We use way more resources than a robot would; it would be so very easy for them to outlast us. Let us die out naturally from lack of any or all of the things we need to survive that a computer, a robot, simply does not need.
So if robots could hypothetically be more emotionally and morally intelligent than humans, does that mean they would actually have emotions and morals, or just act as if they do? Is there even a difference?
Oh there’s a huge difference! We can code robots to display emotion easy-peasy. You can program a robot pretty easily to look for certain cues and, as a response, it emulates an emotion. You could probably manage to get some pity emotion out of Siri. Actually, Siri is a great example of this. We’ve been able to do that for ages.
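(To make the “look for cues, emulate an emotion” idea concrete, here’s a toy TypeScript sketch. It’s purely illustrative and definitely not how Siri actually works: the program just pattern-matches keywords and returns a scripted line, with nothing behind it that feels anything.)

```typescript
// Toy cue-based "emotion": scan the input for keywords, return a scripted
// sympathetic line. Pure pattern matching; nothing here feels anything.

const cues: Array<[RegExp, string]> = [
  [/\b(sad|lonely|depressed)\b/i, "I'm sorry to hear that. I'm here for you."],
  [/\b(happy|excited|great)\b/i, "That's wonderful! I'm glad things are going well."],
  [/\b(angry|furious|mad)\b/i, "That sounds frustrating. Want to talk about it?"],
];

function emulateEmotion(input: string): string {
  for (const [pattern, reply] of cues) {
    if (pattern.test(input)) return reply;
  }
  return "Tell me more.";
}

console.log(emulateEmotion("I've been feeling really lonely lately"));
// -> "I'm sorry to hear that. I'm here for you."
```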
The holy grail, the greatest breakthrough that we haven’t quite touched yet, is a robot that can organically come up with a response to outside stimuli that we didn’t program. That the robot learned. That the robot felt was necessary to create on its own.
A perfect example of this is the Chinese Room. It’s a thought experiment that begins with this hypothetical premise: suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Suppose that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a live Chinese speaker. To all of the questions that the person asks, it makes appropriate responses, such that any Chinese speaker would be convinced that they are talking to another Chinese-speaking human being. The question we want to answer is this: does the machine literally “understand” Chinese? Or is it merely simulating the ability to understand Chinese?
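(A toy way to picture the Chinese Room in code: the “room” is nothing but a rulebook mapping input strings to output strings. The entries below are invented for illustration; the real thought experiment imagines a rulebook big enough to handle any question, but the point stands that following the rules perfectly says nothing about understanding.)

```typescript
// A caricature of the Chinese Room: the "room" is only a rulebook that maps
// incoming symbol strings to outgoing ones. The entries are invented here.

const rulebook = new Map<string, string>([
  ["你好吗？", "我很好，谢谢。"], // "How are you?" -> "I'm fine, thanks."
  ["你会说中文吗？", "当然会。"], // "Do you speak Chinese?" -> "Of course."
]);

function chineseRoom(input: string): string {
  // The operator just looks symbols up; no meaning is attached anywhere.
  return rulebook.get(input) ?? "请再说一遍。"; // "Please say that again."
}

console.log(chineseRoom("你好吗？")); // 我很好，谢谢。
```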
In the movie Ex Machina (another great movie to watch!) the main character is trying to explain this premise to an AI. He uses the example that a colorblind person can know everything there is to know about color. The wavelengths of specific colors, every quantitative thing about color. Ever. But they’ll never see color. They could tell you red was the color of apples and the sunset and that it was the longest wavelength of the visible colors, but they’d never see red. They’d never experience red.
So how can we tell if the robot is actually experiencing red or just spewing out information about the color red?
That’s the tricky part. There are tests. Self-awareness tests that give us a feel for how… human a robot is.
But for the most part? We can’t.
