
Mean prediction method for the GPLVM #532

neildhir opened this issue Aug 3, 2017 · 2 comments
neildhir commented Aug 3, 2017

Apologies in advance if this is not the right place to ask this sort of thing, or if this is already covered in the docs; I have not managed to find it.

I am curious about my implementation of 'mean-prediction' for Gaussian process dynamical models (GPDM), which is the same as for the Gaussian process latent variable model (GPLVM). Basically, I am not quite sure I have done the right thing. I want to use this to synthesise new observation-space examples.

Here's a copy-and-paste from section 3.1 of the paper. I will be using an MWE for the GPLVM -- but they use the same mean-prediction method.

For both 3D people tracking and computer animation, it is desirable to generate new motions efficiently. Here we consider a simple online method for generating a new motion, called mean-prediction, which avoids the relatively expensive Monte Carlo sampling used above. In mean-prediction, we consider the next time step $\tilde{x}_t$ conditioned on $\tilde{x}_{t-1}$ from the Gaussian prediction:

Equations (17) and (18) then follow in the paper.
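For reference, my transcription of those two equations, which as I read them are just the standard GP predictive mean and variance for the latent dynamics (and are what the code below computes):

$\mu_X(\tilde{x}_{t-1}) = X_{2:N}^{\top} K_X^{-1} k_X(\tilde{x}_{t-1})$

$\sigma_X^2(\tilde{x}_{t-1}) = k_X(\tilde{x}_{t-1}, \tilde{x}_{t-1}) - k_X(\tilde{x}_{t-1})^{\top} K_X^{-1} k_X(\tilde{x}_{t-1})$

Here $X_{2:N}$ stacks the latent points $x_2, \dots, x_N$, $K_X$ is the kernel matrix on $x_1, \dots, x_{N-1}$, and $k_X(\cdot)$ is the vector of kernel evaluations against those points.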

This is a GPy implementation of how I think it should be done.

import GPy
import numpy as np

Y = ...  # [IMPORT SOME TRAINING DATA OF CHOICE], an (N x D) array
D = Y.shape[1]  # dimensionality of the observation space
Q = 10          # dimensionality of the latent space
N = 1000        # number of time-steps in the training data

# Normalise the training data (scalar mean/std over all entries)
sample_mean = np.mean(Y[:N, :])
sample_std = np.std(Y[:N, :])
sample_norm = (Y[:N, :] - sample_mean) / sample_std

kern = GPy.kern.RBF(Q, ARD=True)  # RBF kernel with ARD lengthscales
m = GPy.models.GPLVM(sample_norm, input_dim=Q, kernel=kern, init='PCA')
m.optimize(messages=1, max_iters=10)

Alternatively, just grab one of the GPy examples; they do the same thing.

A simple function for prediction then uses the trained GPLVM:

def prediction(n):
    # n :: number of examples (time-steps) to synthesise

    # Latent space learned by the GPLVM
    X = m.X

    # Dynamics Gram matrix over latent points 1..N-1
    Kx = m.kern.K(X[0:N - 1, :])
    Kx_inv = np.linalg.inv(Kx)

    N_synt = n
    X_synt = np.zeros((N_synt, Q))
    X_last = X[-1:]  # seed the rollout with the last training latent point

    # Mean-prediction: mu_X(x) = X_{2:N}^T Kx^{-1} k_X(x)
    X1 = X[:N - 1, :]  # dynamics inputs  x_{1:N-1}
    X2 = X[1:N, :].T   # dynamics targets x_{2:N}

    def meanPrediction(X_last):
        k_x = m.kern.K(X1, X_last)
        return np.dot(np.dot(X2, Kx_inv), k_x).flatten()

    # Roll the latent dynamics forward, feeding each predicted mean back in
    for i in range(N_synt):
        X_synt[i, :] = meanPrediction(X_last.reshape(1, Q))
        X_last = X_synt[i, :]

    # Map the synthesised latents to observation space
    Y_synt = np.zeros((N_synt, D))
    Ky = m.kern.K(X) + m.likelihood.variance * np.eye(N)  # include learned noise
    Ky_inv = np.linalg.inv(Ky)
    Y_t = sample_norm.T
    k_y = m.kern.K(X, X_synt)
    k_yy = m.kern.K(X_synt, X_synt)

    meanY = np.dot(np.dot(Y_t, Ky_inv), k_y)          # shape (D, N_synt)
    covY = k_yy - np.dot(np.dot(k_y.T, Ky_inv), k_y)  # shape (N_synt, N_synt)

    # To sample Y ~ N(meanY, covY) we need a matrix square root of covY,
    # not covY itself; use a (jittered) Cholesky factor
    L = np.linalg.cholesky(covY + 1e-8 * np.eye(N_synt))
    for i in range(D):
        Y_synt[:, i] = meanY[i, :] + np.dot(L, np.random.randn(N_synt))

    # Shift back to un-normalised space
    Y_synt_denorm = sample_std * Y_synt + sample_mean

    return Y_synt_denorm
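
For what it's worth, usage would be something like this (200 is an arbitrary choice):

# Synthesise 200 new time-steps in the original observation space
Y_new = prediction(200)

I believe the latent-to-observation part could also be cross-checked against GPy's built-in predictive, e.g. mean_norm, var_norm = m.predict(X_synt) in the normalised space, which should agree with meanY/covY above, assuming I am reading the API correctly.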
lawrennd (Member) commented Aug 8, 2017 via email

neildhir (Author) commented Aug 8, 2017

Well, it should be the same mean-prediction function for the GPLVM as well (unless I have misunderstood something).

I can take a stab at a pull request, but I still need to know whether this is correct or not :)
