Visual-Interaction-Networks

An implementation of DeepMind's Visual Interaction Networks in PyTorch.

Introduction

To study the challenge of relational reasoning, DeepMind published the Visual Interaction Network (VIN), a model that predicts the future of a physical scene. From just a glance, humans can infer not only what objects are where, but also what will happen to them over the upcoming seconds, minutes, and in some cases even longer. For example, if you kick a football against a wall, your brain predicts what will happen when the ball hits the wall and how its movement will be affected afterwards (the ball will ricochet at a speed proportional to the kick and, in most cases, the wall will remain where it is).

Architecture
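The VIN pairs a visual encoder with a relational (interaction-network) core that updates each object's state from the aggregated effects of all pairwise interactions. A minimal sketch of such a relational core in PyTorch is below; the module name, layer sizes, and state dimensions are illustrative assumptions, not the code in this repository:

```python
import torch
import torch.nn as nn

class InteractionCore(nn.Module):
    """Sketch of an interaction-network core: a relation MLP is applied to
    every ordered pair of object states, the resulting effects are summed
    per receiving object, and an update MLP predicts the next state.
    Hypothetical sizes; not this repository's actual implementation."""

    def __init__(self, state_dim=4, effect_dim=32, hidden=64):
        super().__init__()
        self.relation = nn.Sequential(
            nn.Linear(2 * state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, effect_dim))
        self.update = nn.Sequential(
            nn.Linear(state_dim + effect_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim))

    def forward(self, states):
        # states: (batch, n_objects, state_dim)
        b, n, d = states.shape
        # Build all ordered (receiver, sender) pairs.
        recv = states.unsqueeze(2).expand(b, n, n, d)
        send = states.unsqueeze(1).expand(b, n, n, d)
        pairs = torch.cat([recv, send], dim=-1)        # (b, n, n, 2d)
        effects = self.relation(pairs)                 # (b, n, n, e)
        # Mask out self-interactions, then sum effects per receiver.
        mask = 1.0 - torch.eye(n).view(1, n, n, 1)
        agg = (effects * mask).sum(dim=2)              # (b, n, e)
        # Predict each object's next state from its state + aggregate effect.
        return self.update(torch.cat([states, agg], dim=-1))

core = InteractionCore()
next_states = core(torch.randn(8, 3, 4))  # 8 scenes, 3 objects, 4-dim states
print(next_states.shape)  # torch.Size([8, 3, 4])
```

In the full VIN, the input to a core like this comes from a CNN encoder over consecutive frames rather than raw state vectors, and a decoder maps the predicted states back to object positions.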

Data

I used @jaesik817's physics engine to generate the data.

Just run physics_engine.py.

Usage

Main Dependencies

Python 3.5
PyTorch 0.3
NumPy 1.13.1

RUN

  • Edit the configuration file to meet your needs.
  • Run vin.py

References
