
PrettyTensor compatibility #4

Open · leotam opened this issue Mar 24, 2017 · 6 comments

@leotam commented Mar 24, 2017

Really nice repo, but it appears PrettyTensor support is somewhat broken due to the fast-changing TF API.

Some of the issues are similar to google/prettytensor#46.

The data is also a bit harder to locate now due to dead links. The attribute labels can at least be found here: https://s3.amazonaws.com/cadl/celeb-align/list_attr_celeba.txt

@timsainb (Owner)

Great, thanks! I've since rewritten much of this to do a number of things, like getting rid of PrettyTensor and implementing DeepMind's up-convolution tweaks, but I still need to clean up the code and reapply it to the faces dataset. I will try to update the repo as soon as I get a little bit of time!

@alexrakowski

Is there any chance of the updated code being uploaded soon?
I would also add links to the original CelebA site, since the download URLs change quickly.

@timsainb (Owner) commented May 18, 2017

Hi all, sorry I've been a bit swamped with research the past few months. I don't quite have time to rebuild and debug, but here are some quick updates you can implement. The examples here are from a different network with a slightly different architecture, so a few changes would be needed.

To get rid of PrettyTensor in layer creation, replace the pt.wrap layers with tf.contrib.layers calls (layers below refers to tf.contrib.layers). For example:

def encoder(X):
    # Reshape the flat input back to an image and downsample with strided convolutions.
    net = tf.reshape(X, [batch_size, dim1, dim2, dim3])
    net = layers.conv2d(net, 32, 5, stride=2)
    net = layers.conv2d(net, 64, 5, stride=2)
    net = layers.conv2d(net, 128, 5, stride=2)
    net = layers.conv2d(net, 256, 5, stride=2)
    # Flatten and map down to the latent code.
    net = layers.flatten(net)
    net = layers.fully_connected(net, 4000)
    net = layers.fully_connected(net, 4000)
    net = layers.fully_connected(net, hidden_size, activation_fn=None)
    return net
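For reference, the snippet above assumes layers is tf.contrib.layers and that the shape constants are defined elsewhere; a minimal sketch of that setup (the concrete values are illustrative, not from the repo):

    import tensorflow as tf
    from tensorflow.contrib import layers

    # Illustrative shapes for cropped CelebA images; the real values come from the data loader.
    batch_size, dim1, dim2, dim3 = 64, 64, 64, 3
    hidden_size = 128  # latent dimensionality (assumed)

    X = tf.placeholder(tf.float32, [batch_size, dim1 * dim2 * dim3])
    z = encoder(X)  # -> [batch_size, hidden_size]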

To use the resize deconvolutions discussed here, replace the deconv layers with something like this:

def generator(Z):
    # Project the latent code up to a small spatial feature map.
    net = layers.fully_connected(Z, 4000)
    net = layers.fully_connected(net, 4000)
    net = tf.reshape(layers.fully_connected(net, 4 * 4 * 256), [batch_size, 4, 4, 256])
    # Resize-then-convolve in place of transposed convolutions.
    net = tf.image.resize_nearest_neighbor(net, (8, 8))
    net = layers.conv2d(net, 256, 5, stride=1)
    net = tf.image.resize_nearest_neighbor(net, (16, 16))
    net = layers.conv2d(net, 128, 5, stride=1)
    net = tf.image.resize_nearest_neighbor(net, (32, 32))
    net = layers.conv2d(net, 32, 5, stride=1)
    net = layers.conv2d(net, dim3, 1, stride=1, activation_fn=tf.sigmoid)
    net = layers.flatten(net)
    return net

In inference, instead of:

    with pt.defaults_scope(activation_fn=tf.nn.elu,
                           batch_normalize=True,
                           learned_moments_update_rate=0.0003,
                           variance_epsilon=0.001,
                           scale_after_normalization=True):

You can use arg_scope (note that it is a context manager):

    with arg_scope([layers.fully_connected, layers.conv2d], activation_fn=tf.nn.relu):
        ...
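A minimal sketch of how that could be wired up, mapping the old defaults_scope settings onto arg_scope (the batch-norm mapping and the X placeholder are my assumptions, not code from the repo):

    from tensorflow.contrib.framework import arg_scope

    # Rough equivalent of pt.defaults_scope: elu activations plus batch norm on every
    # conv / fully-connected layer (epsilon and scale mirror the old
    # variance_epsilon / scale_after_normalization settings).
    with arg_scope([layers.fully_connected, layers.conv2d],
                   activation_fn=tf.nn.elu,
                   normalizer_fn=layers.batch_norm,
                   normalizer_params={'epsilon': 0.001, 'scale': True}):
        z = encoder(X)        # X: flattened image batch (hypothetical placeholder)
        x_hat = generator(z)  # flattened reconstruction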

If anyone implements this stuff we can pull in the new version! Sorry again for not updating sooner!

@alexrakowski commented May 18, 2017

The issues I was having were related to the zeros_initializer constructor having been updated.
The way I solved it was simply to replace it with None, since it was only used for biases. I assume this is correct, since I am able to train the network :)
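For anyone hitting the same thing: this looks like the TF 1.0 change where tf.zeros_initializer went from a plain function to a class that has to be instantiated. A minimal sketch of the two possible fixes (the bias_init name is only illustrative):

    # Old PrettyTensor-era style -- breaks once zeros_initializer becomes a class:
    #     bias_init = tf.zeros_initializer
    # Either instantiate it explicitly...
    bias_init = tf.zeros_initializer()
    # ...or, as described above, drop it and pass None so no bias initializer is used.
    bias_init = None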

Thanks for your response!

@GloryyrolG

> (quoting @timsainb's May 18, 2017 comment above)

Has anyone implemented this arg_scope and would be willing to share?

@shine0318

> (quoting @timsainb's comment and @GloryyrolG's question above)

Sorry, have you solved this problem?
