A confused otter’s mumblings and rumblings

Monday, 25th August, 2014

Innocent guilty pleasure

Filed under: Humans, Mammals, Pleasure — Otter @ 05:20

I walk by the bus station. A young woman is looking carefully over the timetable, knees slightly bent, hands on thighs to balance herself, the curves of her bottom discernible in the creases of her chinos. She glances around nervously, catches my eye as I continue past, inhaling briefly in shock, and turns back with embarrassment to the schedules, position unmodified. She is extremely pretty, and scared of her power.

I keep walking after one last look back. Transience is the only intransient, and soon I will forget her. She will become just a whisper of a dream.

And then I realise that was my bus stop too, the one I needed to get where I need to go. So I turn back, scared and glad.

And she is still waiting as I arrive, still nervous, running over to the timetable every few seconds to confirm something or learn something new about Hong Kong’s public transportation system. I watch her legs as she does so, marvelling that their colour and shape alone can be so erotic. 

By the fifth time she scurries, I laugh at her, too loudly; she probably notices, and scampers to the other side of the stop, movements so controlled, as though she is scared of some violation. Another bus comes, and this time she is gone. 

And now I am on my bus smiling, pleased by the hold of simple beauty, by innocent pleasures, and by the lightness I feel when I stop worrying and allow myself to just enjoy.


Monday, 21st May, 2012

Emails from a long time past

Filed under: Humans, Reading — Otter @ 09:37

I thought that when I read the old emails I would start feeling guilty, because I never pass up an opportunity to feel guilty (after all, I would feel guilty if I did). This is why I avoided reading them until today, when curiosity and the urge to procrastinate overwhelmed my supertaut risk aversion. But instead I’m numb and confused [and so it turns out that the title of this blog is immeasurably more apposite than I could have guessed in my confused state when I started it]. Does this reaction devalue the times in which they were written, or the feelings I expressed in them and no doubt experienced in my life around them? I don’t think so. That was then and this is now. But I’d be really much more relaxed and satisfied if I could just work out what “now” indeed is…

Monday, 6th November, 2006

Lack of AI, lack of HI

Filed under: AI, Humans — Otter @ 20:00

This is probably a repeat. That just goes to show how right I might be. I know that might convince you. I know I can’t know that.

To the matter at hand: there is still no AI around. I don’t mean that there aren’t machines capable of solving problems explained to them in a language they can understand — that has been the case for decades with general-purpose computers, and arguably includes even the simplest machines if performing physical tasks through physical manipulation were not excluded — but rather that there is no machine capable of being given any problem to solve in a human language and going ahead and solving it. Only humans come close to achieving this goal.

Why is this? I believe, at the moment, (for all that my beliefs are worth, especially the transient ones), that it is due entirely to the arrogance of those humans. All the examples of supposed AI research that I’ve come across fail to realise that the algorithms the intelligent machine must follow should be flexible enough to deal with unforeseen circumstances in a “rational” way, defined as relative to its goals (which I assume would be, given our needs, to solve the given problem — or, equivalently in my mind, to reach the desired defined goal — as “resourcefully” as possible, where, again, we define the preciousness of resources as we wish).

Unforeseen circumstances come about because of uncertainty about “the world”. This uncertainty, and how to deal with it, must therefore be considered when designing the algorithm to endow the machine with. This is not hard: Bayesian statistics, anyone?
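Since I invoked Bayesian statistics so breezily, here is a toy sketch of what I mean — a machine that, instead of failing on a circumstance its designer never enumerated, updates its beliefs about “the world” from evidence. The hypotheses, likelihoods, and numbers are entirely made up for illustration; this is not a design, just the shape of the idea.

```python
# Toy Bayesian update: an agent maintains beliefs over hypotheses about
# the world and revises them when an observation arrives, rather than
# relying on a hard-coded rule for every circumstance.

def bayes_update(prior, likelihood, observation):
    """Return the posterior P(hypothesis | observation) over all hypotheses."""
    unnormalised = {h: prior[h] * likelihood[h](observation) for h in prior}
    total = sum(unnormalised.values())
    return {h: p / total for h, p in unnormalised.items()}

# Two invented hypotheses about the world: a door is locked or unlocked.
prior = {"locked": 0.5, "unlocked": 0.5}

# Invented likelihoods of hearing a click when pushing the handle.
likelihood = {
    "locked": lambda obs: 0.9 if obs == "click" else 0.1,
    "unlocked": lambda obs: 0.2 if obs == "click" else 0.8,
}

posterior = bayes_update(prior, likelihood, "click")
# The agent can now act relative to its goals using the posterior,
# weighing resources however we have chosen to define their preciousness.
```

The point is only that uncertainty is handled by a general update rule, not by enumerating every unforeseen case in advance.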

That’s it. Comments welcome. Read this too, which I link to if only so that there’ll be a link to another blog in this post. Isn’t that one

Tuesday, 9th May, 2006

A question you must answer correctly

Filed under: Humans — Otter @ 03:17

Aren’t humans just stupid, beautiful piles of carbon? (The correct answer is “yes”, so you have no excuse now).

Friday, 10th February, 2006

Another title? You addict

Filed under: Humans — Otter @ 22:46

I’m ill today — or sick, if you prefer. It’s all semantics: the symptoms will probably be the same either way. So excuse the delusion: it’s not connected.

Lots of thoughts today on AI, as I defined it earlier in time. Goodness knows what any of them mean in practice. Bloody practice, always getting in the way. Why do we have to do anything any more? Why don’t we have assistants to do all the stuff we don’t want to? Anyone who thinks we’re rich is a fool — but anyone in the West who thinks they’re poor is just disgusting. I’d like to see them go to Africa without any money and blow their minds with real poverty, where human existence is totally different, and much less enjoyable for all that. If they turn out to like being infected with sundry diseases that in the Occident were dealt with decades ago, all well and good. It’ll be a very “long tail”, but I’m sure there’s someone out there. Maybe those that get to become kleptocrats.

I must stop a) complaining about things I can’t do anything about without the strong possibility of a myocardial infarction; b) dithering and blabbering; c) being ill/sick. None of these are likely to make me rich, and that’s what it’s all about, yes?

Sorry to write it in such a fashion (I can’t help it), but: Maybe not. Or at least not in the sense you, whatever you are, are very likely to have thought of, assuming you understand what the hell I’m talking about. You know the sense I mean, you comfortable idiot. For by developing AI for all tasks, we will be rich beyond compare. Nothing will cost us anything, and assuming that technology will have advanced to such a stage that we will be able to ask for anything we can dream of [including self-enhancements, you, which while a fine idea, are really only part of the “bigger picture” (you love those metaphors, don’t you, you ape? Yeah, you, reader. You. You’d support a fascist if they said the right metaphors at the right time, like an evil comedian, wouldn’t you? Wash your mouth out.)]. Although people will still complain and fuck (exclusively with other humans, if they can, mostly, probably, I hypothesise) and puke, they’ll be able to clean up the mess without any effort whatsoever.

Dystopia? Why? Because you want to control how other people behave, don’t you, even though it has nothing to do with you, right? You arrogant shit. Be an arrogant, patronising shit, fine, but at least admit it. I won’t think any less of you — it’s too late for that. It’ll just satisfy my silly hunger for honesty and accountability.

Wow, these tiny organisms are really affecting me, whatever I am. I keep forgetting about them, momentarily, because I can only think of one thing at a time. [Will we be able to directly create AI with parallel processing powers? Or will we have to wait for our less functional AI to learn how to do it? I’m not taking bets, but you can give me stakes to look after, if you like].

I want to write something about writing, partly because I can, and partly because I know I can, and partly for other reasons [I think that covers all the reasons, as it were]. But I should do so in another post, for other reasons, or maybe the same ones by description. It’s really not that interesting. Move along. Continue your easy life dished to you on a non-existent plate. Go on. Shoo. We’ve reached the end.

But it took me so long to find that word (you know the one, literal literal smartass), that I can’t be bothered any more. One post should be enough. Do something useful until I tell you to stop. [And you’re unlikely to ever know if I wrote these words in the order you read them, no matter in what order you read them. Isn’t that wonderful, and scary, and dizzy-making? Who told you to stop doing something useful?]

Thursday, 9th February, 2006

Linky link link, blergh

Filed under: Humans, Miscellaneous, Reading — Otter @ 02:09

This is good shit, except for the Google posts, which are so mundane I wonder what the writer was smoking. If he wasn’t smoking or somesuch [you know what I mean twatface] then he has no excuse:

did a touch make you less lonely

But ultimately, it’s still shit. Like I say, it’s not his fault, because he’s human, a silly mortal human.

I’m also linking because it’s Good To Link. I’m a servient otter. Don’t you believe it, whether you believe it or not. Hmm.

No title yet, but it might possibly be this

Filed under: AI, Humans — Otter @ 01:56

I think now that I got that point or so across, I can get on with some more “normal” writing. Just don’t say you haven’t been warned that you are at best reading something that will become very famous, at worst something that will destroy the world (or at least humanity) [the event with possibly the largest discrepancy between deserved and expected fame], and in some maximum/expected probability way, something that will not achieve anything of “significance”, which while being merely defined as its effect on humanity, subjectively defined, almost completely subjectively determined, is by definition “important”, “crucial”, influential… Without it, you’re masturbating. But I warned you of that, so you shouldn’t be surprised.

[Start the profiling now, if you like. He (and of course it’s a “he”) mentioned probability — so probably trained in probability. Obviously “intelligent”, though you probably don’t realise how little, because it’s relative to the average human, who isn’t that bright in absolute terms. Writes a blog: what a goldmine! Anonymously, yet tries to help his identification. Who is this freak? This self-deprecating freak. Why can’t he just shut up and do something “normal”? He already does; you’ll find it described by many as “living his life”. Who are you to judge me? Don’t you dare, you failure. I can manipulate you, theoretically, although my practice still leaves much to be desired. And I’m glad for it. I’m a nice person. I do care about other people, otherwise why would I have written all that aggressive crap? I’m trying to believe in something simpler, easier, easier. But I just can’t. I don’t believe any of it.]

And so on to my idea: artificial intelligence. Heard of it before? Yes, I decided to use a term you’ve heard of, though who knows if I mean it in the same way as that and those who said it to you before. Now you might be a total idiot, but that’s not your fault. You’re just built that way. But given that, you’re quite smart. You can certainly read this, although you will make all sorts of assumptions whilst doing so, and you’re not really reading every word, are you? You’re likely to miss it if I put the same word at the end of a line and at the start of the next one. I mean, you’ll see the word, but only one instance of it. But you knew what I meant. See? Smart, but necessarily stupid.

Which somehow, according to me (and you’ll probably agree, you schmuck, you nebbish), brings us to artificial intelligence. Defining “intelligence”, without the quotes, as the ability to perform a task, and letting “artificial” mean created much more directly than usual by humans [Don’t forget it’s bollocks…], you’ll respond, were you both logical and outspoken about your logical conclusion, that we already have artificial intelligence. You’ll perhaps cite examples, examples which for example might include a vacuum cleaner. You’re lucky to be able to think of it and not need an actual, physical example of a vacuum cleaner at hand to make the point to some humans who matter in your scheme of things. Appreciate it, and marvel.

So if we already have artificial intelligence, you are likely to continue in that cute human way of yours, what is my point? My point is, for the sake of argument, that current artificial intelligence research focuses too much on creating very unintelligent artifices. What is needed is something much more intelligent (invoking the naive definition, not that you are likely to care, given that you’ve let so many others get away with worse, if it really is worse), which will be able to learn how to do things. Is that so hard? Why can’t we just get a program to sense things, make connections, and just make more connections, while also sensing new things, making yet more connections? We can, and I will work on it. It’s the only project worth working on, except, I argue, making sure I stay alive, which encompasses a wider range of activities on my and your part than I believe you were likely to realise at first.
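Lest “sense things, make connections” sound entirely like hand-waving, here is a deliberately naive sketch of the loop I mean: sense co-occurring features, strengthen the links between them, repeat. The feature names and the threshold are illustrative inventions, not a proposal.

```python
# Naive connection-maker: counts co-occurrences between sensed features
# and treats frequently co-occurring pairs as "connections".

from collections import defaultdict
from itertools import combinations

class Connector:
    def __init__(self):
        # (feature_a, feature_b) -> how often the pair has been sensed together
        self.strength = defaultdict(int)

    def sense(self, features):
        """Observe a set of co-occurring features and strengthen their links."""
        for a, b in combinations(sorted(features), 2):
            self.strength[(a, b)] += 1

    def connections(self, min_strength=2):
        """Return the pairs sensed together at least min_strength times."""
        return {pair for pair, s in self.strength.items() if s >= min_strength}

c = Connector()
c.sense({"rain", "wet-street"})
c.sense({"rain", "wet-street", "umbrella"})
c.sense({"umbrella", "shop"})
# ("rain", "wet-street") has co-occurred twice, so it becomes a connection;
# ("shop", "umbrella"), sensed once, does not.
```

Of course this is the easy part; the hard part, which I am cheerfully deferring, is sensing the right features and doing something rational with the connections.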

Wasn’t this all terribly interesting? I bet you feel better for having read it, if you did read it. If you didn’t, you really should. If there are any ways you can make me richer, you should do those too. Excellent.
