Neural Network Modeling

I need to work today because I’ve unfortunately committed myself to making a company run. I left a comment about neural networks over at the Blackboard blog, and like many other random things, I find the math of neural networks interesting. There was a comment made that ChatGPT wasn’t really paying attention to its own conversation. While this is not true, it is very close to true, as the “paying attention” is simply a continual re-weighting of modeled neurons based on immediate history. Humans are very attentive to the moment; ChatGPT has a slower shift in its matrix. This won’t make sense to many but I’m guessing a couple folks might figure it out.
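A minimal sketch of that re-weighting in plain Python, with toy one-hot vectors (this is scaled dot-product attention, heavily simplified from what a real transformer does): the weights over the history are recomputed from scratch for every new token.

```python
import math

def attention(query, keys, values):
    """Scaled dot-product attention: re-weight the whole history per step."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]        # softmax: weights sum to 1
    out = [sum(w * v[i] for w, v in zip(weights, values)) for i in range(d)]
    return out, weights

# Toy three-token history, one-hot for clarity; the query matches the newest token.
keys = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0]]
values = keys
query = [0.0, 0.0, 1.0, 0.0]
out, weights = attention(query, keys, values)
```

Because the query lines up with the newest token, the softmax puts the largest weight there; change the query and the whole weighting shifts, which is all “paying attention” amounts to at this level.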

I’ve written a very simple neural network, back in about 1998. It gave a lot of insight into how these conversations and various other AI things are created. Two years ago, I started new software to create a distributed network that would perform similarly. I got into the weeds of observed human and animal neurons and found that our study of them is so limited that we don’t really know what they do.

For instance, in an AI neural model there is a strong negative feedback signal that can travel back through many of its structured layers. Would you believe that this feedback does not really seem to exist in nature? Nothing I found in neural research showed this effect. There may be a tiny bit of back-EMF between synapses, but it’s really small.
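The artificial version of that feedback is backpropagation; here is a minimal sketch with scalar weights (two neurons only, values chosen arbitrarily) showing the output error being pushed backward through each layer to adjust the weights:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_step(x, target, w1, w2, lr=0.5):
    # forward pass through two layers (scalar weights for clarity)
    h = sigmoid(w1 * x)
    y = sigmoid(w2 * h)
    # backward pass: the output error is the feedback signal,
    # carried back through each layer by the chain rule
    err = y - target
    d2 = err * y * (1 - y)            # gradient at the output neuron
    d1 = d2 * w2 * h * (1 - h)        # the same signal, one layer deeper back
    return w1 - lr * d1 * x, w2 - lr * d2 * h

w1, w2 = 0.3, -0.2
for _ in range(2000):
    w1, w2 = train_step(1.0, 1.0, w1, w2)
```

After training, the output for the input 1.0 sits close to the target 1.0; without the backward half of the loop, the hidden weight would never learn anything at all.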

No learning feedback? Hard to believe.

There is a jackalope who used to post at tAV calling himself ‘deepthought’. On other sites over the years, I’ve discovered that he is working on the Google AI programs. When questioned with detail, he vanishes. Everything published regarding neural network learning has heavy feedback. What prompted my comment was the statement that the network isn’t listening to its own conversation. It most certainly is, but the weight of the conversation is manually set by act or, more likely, by artifact of the total structure. Basically, it cannot change as quickly based on the experience of an actual conversation.

The whole reason the thing is public is to weight its network based on human interaction, aka learning.

[AI-generated image of a neural network]

14 thoughts on “Neural Network Modeling”

  1. I just had the concept of time-based weight changes in neural networks. The feedback should change dramatically based on how recent the event is. I wonder if that is the key? Humans don’t learn well after a certain input set. You can teach them completely untrue ideas and they cannot shake them over time.

    I’m going to think about this.
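One way to sketch that idea: let the learning rate itself decay with the age of the network, so identical evidence moves the weights a lot early on and barely at all later. A toy model only; the decay constant tau is an arbitrary assumption, not taken from any study:

```python
import math

def plasticity(t, lr0=1.0, tau=50.0):
    """Learning rate as a function of age t: early inputs imprint hardest."""
    return lr0 * math.exp(-t / tau)

# A single weight receives the identical corrective signal at every step,
# but the updates shrink as the network ages, so early lessons dominate.
w = 0.0
history = []
for t in range(200):
    signal = 1.0
    w += plasticity(t) * signal
    history.append(w)

early_gain = history[49] - history[0]      # movement during steps 1-49
late_gain = history[199] - history[149]    # movement during steps 150-199
```

Here early_gain comes out roughly twenty times late_gain: teach this thing something untrue in its first fifty steps and an equally strong signal later can never undo it, which matches the comment’s point about humans.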

    1. Thanks – just left this comment at Lucia’s:

      I did something similar the other day, asking it to calculate pi to 1000 dp. It couldn’t stop at the right number of dp despite 3 tries in a row. Each time it very politely apologised before declaring “here is the correct answer:”
      It is constrained by its training data and knows it (if its dialogue output actually means something…), but it doesn’t then have the ability to frame its answers within an uncertainty that respects this constraint.
      Watched and enjoyed this the other day – recommended by a pal working on AI:

      Couple of years old now, but new to me.
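For contrast, the digit task itself is trivial for deterministic code, which works nothing like a language model predicting tokens. A sketch using Python’s standard decimal module and Machin’s formula, pi = 16·arctan(1/5) − 4·arctan(1/239):

```python
from decimal import Decimal, getcontext

def arctan_inv(x, digits):
    """arctan(1/x) by its Taylor series, good to roughly `digits` places."""
    getcontext().prec = digits + 10            # working precision + guard digits
    total = term = Decimal(1) / x
    x2 = x * x
    n = 1
    tiny = Decimal(10) ** -(digits + 5)
    while abs(term) > tiny:
        term /= x2                             # next odd power of 1/x
        n += 2
        total += term / n if n % 4 == 1 else -term / n
    return total

def pi_to(digits):
    """pi via Machin's formula, rounded to `digits` decimal places."""
    getcontext().prec = digits + 10
    pi = 16 * arctan_inv(5, digits) - 4 * arctan_inv(239, digits)
    getcontext().prec = digits + 1             # one integer digit + the decimals
    return +pi                                 # unary plus rounds to that precision

p = pi_to(1000)                                # exactly 1000 decimal places
```

At this precision the arctan(1/5) series needs about 715 terms and the arctan(1/239) series about 210, so the whole thing finishes almost instantly, and it stops at exactly the requested number of places every time.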

        1. AI is a bit dumber than people think right now. That will change.

          My wife chewed my ass out for my neural network interests. It was a pretty high compliment actually! She’s afraid that I will succeed.

  2. “That will change.”
    Yep! Lots of money going into it. Have you seen the AlphaGo film? Worth a watch – well made, not mad tech content. FWIW I watched it with my wife, who is non-tech, and we both enjoyed it.
    Think I tend to agree with Mrs ID…

  3. The topic of neural nets and how they relate to human thought is interesting. I suspect that humans and animals in general tend to resist “new think” after some time as well. I think it is nature’s version of “if it ain’t broke, don’t fix it”. The unfortunate part of this is that the initial training set up through the teenage years is critical and is likely subject to the “Garbage In, Garbage Out” phenomenon taught in CompSci 101.

    I also think that the training set has different time lengths based on biological development.

    Caveat: I did have a CompSci degree but have never done much with AI beyond reading and thinking.


  4. The financial cost of putting together a home-brew “AI” language simulator is now lower than the cost — even neglecting inflation — of a home-brew computer in the 1980s or a home-brew 3D printer just a decade ago. It’s gonna get weird.

    The best use case I can imagine is proofreading actual human texts — comparing predictive interpreted or interpolated text to the text OCR’d in or even typed, then highlighting mismatches for human review. AutoCorrect on steroids. A lot of the 99-cent eBooks on Amazon could greatly benefit from such copy editing. The most annoying use case I can think of immediately is all the spam robo-calls for auto warranties, solar panels, credit card relief … being so improved that one doesn’t recognize them as robots as easily. The most likely use case is, on the evidence of prior technology leaps, porn. Gonna put a lot of voice actors on phone lines out of a job. Same problem for psychic hotlines, suicide counseling, and Moviefone — not just screens and showtimes but reviews!

    I’m wondering about sort of joint-intelligence or augmented intelligence applications.

    Remember the Pixar movie *Up*, where the dogs all wear collars that allow them human speech? Now, in real life a dog has diction of — is able to understand and correctly react to — a few hundred words, maybe up to 1,000 for a genius-level dog. There’s a limited number of topics that engage a dog’s interest. And there are a few responses that a dog can make to input that a human will willingly accept. (“I will now piss a little bit on this interesting thing” is not among such responses…) Seems to me a polygraph-style set of sensors to figure out what messages the dog is getting and how the dog is responding back could, via language-modeling AI, actually implement such a speech collar.

    Not just for dogs. A text-to-speech device of the sort Stephen Hawking used for decades could provide audio that’s a lot more expressive and interpret a user’s inputs a lot more generally. I can imagine a sort of hearing aid that interprets incoming text or speech and “finger spells” a touch sensation into the palms of the utterly deaf. Why not?

  5. “I suspect that humans and animals in general tend to resist ‘new think’ after some time as well.”

    I know of very few humans changing their ideas over time, even when presented with facts. Exempli gratia: masks.

    I also know of very few humans who would bother to look up the meaning of e.g.

        1. There’s a “Like” button.
          Off the top of my head, I think the last time “data” changed my mind was the studies (sooo long ago) that showed second-hand smoke did not cause cancer. Counterintuitive.
          Also, there was a study in the last year or so that showed that flossing had very little impact on dental health. I haven’t looked into that yet but, again, counterintuitive.

  6. The science of treating public water with fluoride has moved more rapidly than public policy; but questioning decisions of a legislature or public agency made on the basis of the state of the art circa 1965 will get you looked at funny. Don’t even TRY to start asking questions about “Stannous Fluoride” (tin, Sn, element 50) versus “Sodium Fluoride” (sodium, Na, element 11) uptake in bio-systems. And if you think it’s funny that consumer-products companies sell toothpaste with both Stannous Fluoride, known to stain teeth, AND Carbamide Peroxide, to abrade then bleach away dental stains — well, who died and left YOU a gently-used, second-hand dental degree, eh?

    Similarly don’t dare suggest that the Catholic Church of 1968 was properly exercising the Precautionary Principle in calling Birth Control Pills “intrinsically wrong” and dangerous to women’s health.

    Then there are non-scientific, simply more complete, records that should revise people’s opinions, but never do. Those who hated Chambers and supported Hiss have proven over nearly three decades to be impervious to documentation from the VENONA files.
