output norm and different results #13

Open
GoogleCodeExporter opened this issue Mar 16, 2015 · 1 comment

Hi there,
I just realized that the newff output is fixed to the range [-1, 1], and I did the following to test how outputs outside that range work.

import neurolab as nl
import numpy as np

# Create train samples
x = np.linspace(-7, 7, 20)
y = x * 10

size = len(x)

inp = x.reshape(size,1)
tar = y.reshape(size,1)

norm_inp = nl.tool.Norm(inp)
inp = norm_inp(inp)

norm_tar = nl.tool.Norm(tar)
tar = norm_tar(tar)

# Create network with 2 layers and random initialization.
# As I normalized inp, the input range is set to [0, 1]
# (BTW, I don't know how to norm it to [-1, 1]).
net = nl.net.newff([[0, 1]], [5, 1])

# Train network
error = net.train(inp, tar, epochs=500, show=100, goal=0.02)

# Simulate network
out = norm_tar.renorm(net.sim([[ 0.21052632 ]]))

print "final output:-----------------"
print out

inp before norm
[[-7.        ]
 [-6.26315789]
 [-5.52631579]
 [-4.78947368]
 [-4.05263158]
 [-3.31578947]
 [-2.57894737]
 [-1.84210526]
 [-1.10526316]
 [-0.36842105]
 [ 0.36842105]
 [ 1.10526316]
 [ 1.84210526]
 [ 2.57894737]
 [ 3.31578947]
 [ 4.05263158]
 [ 4.78947368]
 [ 5.52631579]
 [ 6.26315789]
 [ 7.        ]]

tar before norm
[[-70.        ]
 [-62.63157895]
 [-55.26315789]
 [-47.89473684]
 [-40.52631579]
 [-33.15789474]
 [-25.78947368]
 [-18.42105263]
 [-11.05263158]
 [ -3.68421053]
 [  3.68421053]
 [ 11.05263158]
 [ 18.42105263]
 [ 25.78947368]
 [ 33.15789474]
 [ 40.52631579]
 [ 47.89473684]
 [ 55.26315789]
 [ 62.63157895]
 [ 70.        ]]

I expect the output to be around -40 after renorm for the input 0.21052632, but the results are not repeatable: sometimes the result is right (around -40), but sometimes it is wrong (it becomes -70).

I am wondering why the training results are not stable, and whether there is a better way to train a NN that produces output values outside the range [-1, 1].
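
(For reference, the run-to-run variation comes from the random weight initialization. Seeding NumPy before building the network should make runs repeatable; this is a sketch under the assumption that neurolab draws its initial weights from numpy.random:

import numpy as np
import neurolab as nl

# fix the RNG so weight initialization, and thus training, is repeatable;
# assumes neurolab initializes weights via numpy.random
np.random.seed(0)
net = nl.net.newff([[0, 1]], [5, 1])

With the seed fixed, repeated runs of the script produce the same trained weights and the same final output.)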

Many thanks,
Derrick


Original issue reported on code.google.com by [email protected] on 28 Apr 2014 at 2:30


To get a more stable training result on these samples, you may change the training function to train_gdx. I think the train_bfgs algorithm is too powerful for this easy task.
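
A minimal sketch of that change, reusing inp and tar from the script above (net.trainf is the attribute neurolab consults for the training function):

import neurolab as nl

# same network as in the issue
net = nl.net.newff([[0, 1]], [5, 1])

# use gradient descent with adaptive learning rate and momentum
# instead of train_bfgs, as suggested above
net.trainf = nl.train.train_gdx

error = net.train(inp, tar, epochs=500, show=100, goal=0.02)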

Normalizing to [-1, 1] is not supported yet, but you may write your own function for this. I will fix it in the next release.
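
Until then, a minimal sketch of such a helper (norm_pm1 and renorm_pm1 are hypothetical names, not part of neurolab; assumes max > min):

import numpy as np

def norm_pm1(x):
    # linearly map x into [-1, 1]; also return the (min, max)
    # bounds needed to invert the mapping later
    lo, hi = x.min(), x.max()
    return 2.0 * (x - lo) / (hi - lo) - 1.0, (lo, hi)

def renorm_pm1(y, bounds):
    # invert norm_pm1: map values in [-1, 1] back to the original range
    lo, hi = bounds
    return (y + 1.0) / 2.0 * (hi - lo) + lo

tar_pm1, tar_bounds = norm_pm1(tar)
# train on tar_pm1, then map predictions back, e.g.:
# out = renorm_pm1(net.sim(inp), tar_bounds)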

Original comment by [email protected] on 30 Apr 2014 at 11:57

  • Changed state: Started
