
2020 PFN Global Internship Program Call for Applications

Preferred Networks (PFN) will be organizing an internship program for summer/autumn 2020 in our Tokyo headquarters.

2019.12.16

※ This internship program is for students who are enrolled in a university or research institute outside of Japan, or who have recently graduated. For students in Japan, we will announce our Domestic Internship Program next March.


Overall

What is PFN’s Global Internship Program?


PFN’s central goal is to make the real world computable. This is, of course, an enormous challenge requiring world-class research, and we want extremely talented students to join us in realizing this goal.

In previous internships, many of our interns published their work at top-tier conferences such as ICLR, ICML, NeurIPS, and ICRA.

We are looking for self-motivated, energetic interns who are eager to boldly explore where no one has ventured before. As an intern, you will work closely with our teams to conduct research on one of the five topics listed in the “Topics for this year” section, all of which focus on industrial applications.

About PFN

PFN is a company specializing in the industrial application of machine learning. As the leading AI startup company in Japan, PFN employs over 270 specialists who develop new machine learning technologies and cutting-edge solutions to challenging real-world problems. (https://preferred.jp/en/)

Why should you intern at PFN?

PFN provides interns with an attractive research environment.

  • Variety of Expertise
    PFN has talented and experienced researchers from a variety of backgrounds, as well as the software engineers who developed the deep learning framework Chainer and related libraries such as CuPy, Optuna, and Menoh. PFN’s ambition is not limited to software: we also have hardware researchers and engineers working on the next generation of chips, clusters, and robots. We conduct interdisciplinary research with these experts in fields such as robotics, networking, biology, and chemistry.
  • Powerful Computing Resources
    PFN operates three clusters with a combined computing power of 200 petaFLOPS, equipped with 1024 V100 GPUs, 512 V100 GPUs, and 1024 P100 GPUs, respectively. In 2017, one of these clusters was ranked 1st in Japan (12th in the world) among industrial supercomputers on the TOP500 list (http://www.top500.org). These clusters power our research activities, including training ImageNet in 15 minutes in 2017 and winning 2nd prize in the object detection track of the Google Open Images Challenge 2018.
  • Access to High-end Robots
    In our robotics research, we use a large number of mobile robots, including the Toyota HSR (Human Support Robot). We are also developing hardware and software to make the operation of industrial manipulators more efficient.
  • Past achievements (selection)
    • Best Paper Award Finalist, ICRA 2019
    • Best Paper Award on Human-Robot Interaction, ICRA 2018
    • Honorable Mention Award, CHI 2019
    • Honorable Mention Award, CHI 2018
    • 3rd prize, Kaggle Google AI Open Images Challenge 2019 (Instance Segmentation track)
    • 2nd prize, Kaggle Google AI Open Images Challenge 2018 (Object Detection track)
    • Training ImageNet in 15 minutes (2017)
    • 2nd prize, Amazon Picking Challenge (2016)

Publications from Previous Internships

  • Einconv: Exploring Unexplored Tensor Decompositions for Convolutional Neural Networks, Kohei Hayashi et al., NeurIPS 2019.
  • A Graph Theoretic Framework of Recomputation Algorithms for Memory-Efficient Backpropagation, Mitsuru Kusumoto et al., NeurIPS 2019.
  • Robustness to Adversarial Perturbations in Learning from Incomplete Data, Amir Najafi et al., NeurIPS 2019.
  • Other than the above, 4 papers in NeurIPS 2019 workshops.
  • Dynamic Task Control Method of a Flexible Manipulator Using a Deep Recurrent Neural Network, Kento Kawaharazuka et al., IROS 2019.
  • Dynamic Manipulation of Flexible Objects with Torque Sequence Using a Deep Neural Network, Kento Kawaharazuka et al., ICRA 2019.
  • A Wrapped Normal Distribution on Hyperbolic Space for Gradient-Based Learning, Yoshihiro Nagano et al., ICML 2019.
  • Learning Discrete Representations via Information Maximizing Self Augmented Training, Weihua Hu et al., ICML 2017.
  • Neural Multi-scale Image Compression, Ken Nakanishi et al., ACCV 2018.
  • Distantly Supervised Road Segmentation, Satoshi Tsutsui et al., ICCV Workshops 2017.

Internship Period

  • Earliest start date: Aug 17th, 2020
  • The period of the internship can be arranged flexibly as follows:
    • Start dates to choose from: mid-August, late August, or early September
    • End dates to choose from: mid-November, early to late December, or later
  • A minimum duration of twelve weeks.

Back to Top

Minimum Required Experience & Skills

  • Formally enrolled in a master’s or PhD program at a university or research institute outside of Japan at the time of application and during the internship.
    Note:

    • While we expect PhD students to apply, exceptional master’s students are also encouraged to apply.
    • A graduate certificate is required at the time of accepting the internship offer in order to start the visa application.
  • Fluent in either English or Japanese.
  • Able to work full-time at our Tokyo office for the duration of the internship period.
  • Additionally, please see each topic description for concrete requirements.

Back to Top

Topics for this year

Topic 1: Next-generation chip architecture for deep learning

Description

Deep learning requires extreme amounts of compute and comes with huge challenges. At PFN, we aim to provide world-class solutions that enable our engineers and researchers to compute more with less. This is exemplified by our current-generation MN-Core accelerator, which provides 524 TFLOP/s of half-precision compute per card while consuming less than 1 W per half-precision teraflop. If you’re into deep learning, but call computer architecture/engineering, semiconductor technology, or a related field your home, come and research the next-next-generation computer architecture for deep learning with us. Let’s pave the way for the future of deep learning research!

Required Experience & Skills

  • Knowledge and skills: 
    • Understanding of deep learning algorithms and popular neural network architectures. Experience optimizing computer architectures for such networks is a big plus.
    • Practical validation skills to show the actual merit of proposals (RTL coding, use of synthesis tools, solid understanding of chip design, usage of simulation environments, solid understanding of timing and power prediction tools and theory, etc.)
    • A solid understanding of architecture or architecture related research (processor-like / RISC-V, reconfigurable computing, domain-specific architectures/languages, applied semiconductor research e.g. post-silicon, compute architecture related research, especially in regard to deep learning applications)
  • Solid publication record: First author publication(s) in related conferences, big plus for publications in DAC / DATE / ASP-DAC / ICCAD, MICRO / ISCA, ISSCC, SC / ISC, VLSI, … 
  • Big plus: Actual design experience

Back to Top

Topic 2: SLAM, SfM, and Depth Estimation

Description

At PFN, we want to ‘make the real world computable’, i.e., enable computational gadgets to interact with the real world. Understanding the geometric structure of the environment is a crucial ingredient for achieving that goal. In particular, localization, mapping, and depth understanding are fundamental capabilities that are necessary for autonomous robots/applications to successfully perform real world tasks. In this internship, we are looking for talented and motivated candidates who want to push the research frontier in these areas forward.

Recent works such as [1] show that deep learning infused with traditional optimization strategies can improve existing localization and mapping methods. In contrast, [2] and its predecessors formulate a completely new deep learning based approach for monocular visual localization. Finally, we are also seeing unmatched performance in depth estimation by novel deep learning based methods [3]. Motivated by this, we want to challenge, improve, and innovate in these domains with skilled interns.

References:

[1] Unsupervised Collaborative Learning of Keyframe Detection and Visual Odometry Towards Monocular Deep SLAM, Lu Sheng et al., ICCV 2019.

[2] From Coarse to Fine: Robust Hierarchical Localization at Large Scale, Paul-Edouard Sarlin et al., CVPR 2019.

[3] Digging into Self-Supervised Monocular Depth Prediction, Clément Godard et al., ICCV 2019.

Required Experience & Skills 

  • Previous experience with research: The candidate is expected to have worked on projects that highlight their ability as a researcher, including projects that have led to publications in top-tier conferences such as CVPR, ICCV, ECCV, ICLR, ICML, NeurIPS, ICRA, and IROS.
  • Strong coding skills: The candidate is expected to have experience in implementing computer vision projects in Python/C++. This includes familiarity with Deep Learning libraries such as TensorFlow, PyTorch, or Chainer.

Preferred Experience & Skills

  • Knowledge: Strong understanding of:
    • Deep Learning (such as deep architectures for depth estimation, semantic understanding), 
    • Classical Computer Vision (related to SLAM, SfM, stereo matching etc.), 
    • Optimization Methods (such as bundle adjustment, Iterative Closest Point).
  • Awareness of current trends: Knowledge of recent methods working towards fusing Deep Networks with Classical Methods, such as CNN-SLAM, or View Extrapolation with Multi Plane Images.

Back to Top

Topic 3: NN-based fast physics simulation

Description

The goal of our company is to “make the real world computable” by improving our understanding of the real world and using that knowledge to regulate and optimize real-world systems. To achieve this goal, we need good simulators for a wide range of real-world processes, from macroscopic dynamics such as climate change down to atomic interactions. While all of these simulators are difficult to develop in their own ways, they share a common challenge: ill-definedness, or the problem of model selection. The only way to resolve this problem is to introduce further inductive biases or regularization functions. We are looking for an ambitious intern who is interested in developing innovative simulation methods to tackle this problem, either in a specific area of science or in simulation more generally. This internship project will be supported by our powerful computing environment, allowing us to run massively parallel simulations or heavy first-principles calculations.

Areas of interest: protein folding, chemical reactions, geological science, materials science, and plant control

Required Experience & Skills

  • Solid publication record as the first author in one or more areas of interest

Preferred Experience & Skills

  • Solid understanding of statistical learning / machine learning
  • Background knowledge of the target area
  • Strong coding skills: The candidate is expected to have excellent implementation skills in Python/C++, possibly demonstrated in their GitHub projects.

Back to Top

Topic 4: Machine learning models with inter-domain transferability

Description

With deep models, big data, and high-performance computers, today’s advanced supervised learning methods perform well on the domain from which the training data was obtained. On the other hand, models trained with classical supervised methods often generalize poorly when making out-of-domain predictions. The inconvenient truth is that, in real-world applications, models are often used in ways they were not originally intended and are often fed out-of-domain data. For instance, it is unlikely that all CT scan images are taken under the exact same measurement conditions with the exact same machine, and reinforcement learning algorithms suffer from the simulation-to-reality gap.

Recently, this problem has been drawing greater attention in the machine learning community, and we believe it is also an important problem for our goal of “making the real world computable.” Recent efforts include the development of models that are capable of fast adaptation, domain transfer, and Sim2Real. During this internship, you will have the opportunity to apply these techniques on PFN’s many robots. We are looking for a self-motivated, energetic intern who is interested in exploring innovative ways to tackle this problem.

Required Experience & Skills

Solid publication record as the first author at top-tier AI conferences such as ECML, ICML, ICLR, AISTATS, ICRA, IROS, ICCV, ECCV, CVPR, and NeurIPS

Preferred Experience & Skills

  • Solid understanding of statistical learning and deep learning
  • Experience implementing deep learning methods in relevant areas

Back to Top

Topic 5: High-Performance Data Communication Network for Parallel and Distributed Deep Learning

Description

Preferred Networks (PFN) builds and operates an in-house supercomputer with our dedicated accelerator devices, called MN-Core, as well as general purpose graphics processing units (GPGPUs) for parallel and distributed deep learning. We are tackling various research challenges to achieve scalable, efficient, and operable deep learning infrastructure. This year’s internship program focuses on two key challenges: 1) data communication network enhancement, and 2) global system optimization of deep learning infrastructure.

Data communication networks are an indispensable component both for communication between multiple accelerators during training and for feeding data from storage to computing nodes. One research challenge is to enhance the data communication network to accelerate parallel and distributed deep learning. Our focus includes remote direct memory access (RDMA) for low latency and high throughput, and in-network computing for offloading collective operations.

Global system optimization of shared resources such as storage and network I/O is also challenging in a multitenant system. This includes the improvement of the job scheduler and resource allocation (e.g., job packing and placement) for heterogeneous jobs, efficient resource utilization such as speculative execution in training data copy, and traffic engineering.

We are looking for a self-motivated intern who takes on these research challenges with enthusiasm.

Preferred Experience & Skills

  • Knowledge
    • Fundamental knowledge on data communication networks
    • Understanding of data communication and collective operations in parallel and distributed deep learning
  • Skills: System programming experience
  • Solid publication record: Publication(s) as the first or corresponding author in related conferences, big plus for publications in ACM/IEEE Supercomputing Conference (SC), International Supercomputing Conference (ISC), ACM SIGCOMM, CoNEXT, USENIX ATC, NSDI

Back to Top

Selection Process & Schedule

  • Application with required documents
  • A preliminary task before the interview, depending on the topic
  • 1–2 rounds of online interviews (may include an online coding test, depending on the topic)
  • Schedule:
    ※ All dates and times are in Japan Standard Time (JST, UTC+9).

    • Application period: 18:00, Dec 16, 2019 (JST) to 12:00 (noon), Mar 30, 2020 (JST)
    • Selection rounds:

Application period                       | CV Review Result Notification | Interview Period  | Final Result Notification
18:00 Dec 16, 2019 – 23:59 Jan 19, 2020 | Feb 7, 2020                   | from Feb 10, 2020 | by Mar 30, 2020
00:00 Jan 20, 2020 – 23:59 Feb 16, 2020 | Mar 13, 2020                  | from Mar 16, 2020 | by Apr 20, 2020
00:00 Feb 17, 2020 – 12:00 Mar 30, 2020 | Apr 10, 2020                  | from Apr 13, 2020 | by Apr 20, 2020

(All times are in JST.)

  • Relocation
    • Visa application: from mid-May (the visa process takes around 2–3 months)
    • Other procedures: transportation, accommodation, etc.

Back to Top

How to Apply

  • Required documents:
    • Resume / CV (PDF format only. Please make sure your resume / CV does not contain any personal or private information other than your name, email address, and affiliation.)
    • A list of your publications
    • A research statement
      • Please also send a research statement that demonstrates a solid grasp of the field and showcases your ability to improve the state of the art within the internship timeframe.
      • Content: Choose one important problem, explain why the problem is important, and outline how you would solve it.
      • Please note:
        • If you write a new statement about something you would like to do at PFN, that is also welcome; however, please keep it separate from your current (main) academic research to prevent complications with your university after the internship.
        • This research statement is used solely to judge your ability to find interesting and impactful questions in the domain.
    • GitHub account (if any)
  • To apply:
    • Please fill out the Google application form and submit your application
    • Deadline: 12:00 (noon), Monday, March 30th, 2020 (JST)
      • Late applications will not be accepted
      • The expected time to obtain a working visa for Japan is 2–3 months

Back to Top

Working Environments & Benefits

Working time & Location

  • 8 hours per day, 5 days per week (excluding national holidays)
  • Location: Preferred Networks Tokyo office: Otemachi Bldg. 3F, 1-6-1, Otemachi, Chiyoda-ku, Tokyo, Japan 100-0004
  • https://preferred.jp/en/company/
  • Contact: intern2020-admin@preferred.jp

Benefits

  • Reimbursement of actual round-trip travel expenses between your home and Tokyo
  • Residential and living expense support in addition to salary
    • Note: Interns need to arrange accommodation by themselves. PFN staff can assist in providing relevant information.

Back to Top

Contact

Contact us here.