Over the past few weeks I've been experimenting with deep learning and researching neural networks and evolutionary adaptation of them. The thing is, I haven't gotten very far. I've built two different approaches so far, with limited results. The frustrating part is that these things are so "random" it isn't even funny. I can't even get a basic ANN + GA to evolve a network that solves the XOR pattern every time with high accuracy.
#zy4an6a
(#zy4an6a) This is one of my attempts:
$ go build ./cmd/xor/... && ./xor
Generation 95 | Fitness: 0.999964 | Nodes: 9 | Conns: 19
Target reached!
Best network performance:
[0 0] → got=0 exp=0 (raw=0.000) ✅
[0 1] → got=1 exp=1 (raw=0.990) ✅
[1 0] → got=1 exp=1 (raw=0.716) ✅
[1 1] → got=0 exp=0 (raw=0.045) ✅
Overall accuracy: 100.0%
Wrote best.dot → render with `dot -Tpng best.dot -o best.png`
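For context, the general idea can be sketched in a few lines of Go: a GA that evolves the weights of a fixed 2-2-1 sigmoid network until it fits XOR. This is a hypothetical, simplified sketch (the run above evidently evolves topology too, given the node/connection counts), not the actual code behind it; every name and parameter here is made up for illustration:

```go
package main

import (
	"fmt"
	"math"
	"math/rand"
	"sort"
)

// A genome is a flat weight vector for a fixed 2-2-1 network:
// 4 input→hidden weights, 2 hidden biases, 2 hidden→output weights, 1 output bias.
const genomeLen = 9

var xorCases = [][3]float64{ // in1, in2, expected
	{0, 0, 0}, {0, 1, 1}, {1, 0, 1}, {1, 1, 0},
}

func sigmoid(x float64) float64 { return 1 / (1 + math.Exp(-x)) }

// forward runs the fixed-topology network encoded by genome g.
func forward(g []float64, a, b float64) float64 {
	h0 := sigmoid(a*g[0] + b*g[1] + g[4])
	h1 := sigmoid(a*g[2] + b*g[3] + g[5])
	return sigmoid(h0*g[6] + h1*g[7] + g[8])
}

// fitness maps sum-of-squared-error to (0, 1]: higher is better, 1 is perfect.
func fitness(g []float64) float64 {
	var sse float64
	for _, c := range xorCases {
		d := forward(g, c[0], c[1]) - c[2]
		sse += d * d
	}
	return 1 / (1 + sse)
}

func main() {
	rng := rand.New(rand.NewSource(42))
	const popSize = 150
	pop := make([][]float64, popSize)
	for i := range pop {
		pop[i] = make([]float64, genomeLen)
		for j := range pop[i] {
			pop[i][j] = rng.NormFloat64() * 2
		}
	}
	for gen := 0; gen < 5000; gen++ {
		// Truncation selection: sort best-first, keep the top quarter as elites.
		sort.Slice(pop, func(i, j int) bool { return fitness(pop[i]) > fitness(pop[j]) })
		if fitness(pop[0]) > 0.99 {
			fmt.Printf("Generation %d | Fitness: %.6f\n", gen, fitness(pop[0]))
			break
		}
		// Refill the rest with mutated copies of random elites.
		for i := popSize / 4; i < popSize; i++ {
			parent := pop[rng.Intn(popSize/4)]
			copy(pop[i], parent)
			pop[i][rng.Intn(genomeLen)] += rng.NormFloat64() * 0.5
		}
	}
	best := pop[0]
	for _, c := range xorCases {
		raw := forward(best, c[0], c[1])
		got := 0
		if raw > 0.5 {
			got = 1
		}
		fmt.Printf("[%g %g] → got=%d exp=%g (raw=%.3f)\n", c[0], c[1], got, c[2], raw)
	}
}
```

Truncation selection plus single-gene Gaussian mutation is about the crudest GA that works here; it also shows why these runs feel random, since everything hinges on the initial weight draws and which genes happen to get mutated.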
#kx273oq
(#zy4an6a) @bender@twtxt.net There is no aim. Just learning. That way I can actually speak and write about these LLMs with a bit more authority 🤣 Or maybe I just happen to become that random weirdo genius who invents Skynet™
#q6rlraq