Senior Research Engineer - Large Language Models



London, UK
Posted on Friday, January 5, 2024

Who are we?

On a mission to make video easy for anyone …

It is an exciting time to join Synthesia: we have reached a milestone by becoming a unicorn, having raised $90 million in Series C funding, and we are now valued at $1 billion! ✨ 🦄

Synthesia is the world's #1 AI video generation platform. Well, it's actually a video production studio — in a browser. As in, no cameras or film crews at all. You simply choose an avatar, enter your script in one of 60 languages, and your video is ready in minutes. With Synthesia, you can build personalised videos on the fly, give your chatbot a human face, or run 24/7 weather channels in different languages, to name just a few of the possibilities. 🎬

We believe the future of media is synthetic, and we are on a mission to turn cameras into code and make everyone a creator. To learn more, check out our brand video that explains what we’re doing at Synthesia.

About the role

We are looking for an experienced Research Engineer with a passion for working on cutting-edge problems that will help us create highly realistic, emotional and lifelike synthetic humans through text-to-video.

Our aim is to make video content creation available to all, not just to studio productions!

🧑🏼‍🔬 You are someone who loves to code and build working systems, and you are used to working in a fast-paced start-up environment. You have experience with the software development life cycle, from ideation through implementation to testing and release.

👩‍💼 You will join a group of more than 50 engineers in the R&D department and will have the opportunity to collaborate with multiple research teams across diverse areas. Our R&D is guided by our co-founders, Prof. Lourdes Agapito and Prof. Matthias Niessner, and by our Director of Science, Vittorio Ferrari.

If you know and love LLMs, are comfortable getting your hands dirty and digging in deep, or have experience fine-tuning LLMs with additional modalities, then we would love to talk to you. We would also love to talk to you if that is what you dream of doing. 🤩

What will you be doing?

🚀 Our NLP team is building a virtual director that automatically generates the best video and the best avatar performance based on the context of a user's script, much as a real-world director creates a powerful performance by directing the actors and guiding other aspects of the video. The team is focused on extracting cues from a large pool of proprietary video and audio data in order to train this virtual director. The NLP team's findings are central in driving the direction of other teams within R&D.

In this position, you'll join the team to help train LLMs for text or multi-modal tasks that will help us create human-like avatars and effective, powerful videos. You will also help us write high-quality, production-ready code and take ownership of production pipelines. This will include:

  • Assist in finding and collecting appropriate data for the tasks.
  • Make sure the right pipelines are in place to ingest existing processed data (text, voice and video) and adjust its granularity.
  • Analyze and pre-process data if necessary.
  • Help define and run experiments that predict signals from different architecture choices for the adapted LLM.
  • Turn hypotheses, ideas or papers into working implementations.
  • Run experiments on those implementations to evaluate and verify the hypotheses.
  • Contribute to a high-quality codebase, develop tests where necessary, and participate in code reviews.
  • Help a team of research engineers, scientists and linguists write clean, efficient code, and define best practices.
  • Develop production-quality software; our teams work "full stack", delivering directly to the product and making an impact in the world.

Who are you?

  • You have a background in Computer Science and 5+ years of relevant industry experience.
  • You have experience processing and analyzing large amounts of data.
  • You have substantial experience working with and training Large Language Models on sizable clusters, applying this knowledge to practical scenarios.
  • You have a good understanding of how transformer models work and can implement them without relying on existing libraries.
  • You demonstrate excellent Python coding skills and have a deep interest in the software development aspect of ML.
  • You possess a research-oriented mindset, constantly exploring new techniques and solutions, and are adept at finding what works best in a given scenario.
  • And finally... you have excellent verbal and written communication skills, and you are passionate about what you do!

Nice to have…

  • Advanced expertise in PyTorch, with a deep understanding of its distributed and parallel approaches, as well as the related tools and frameworks.
  • Proficiency with state-of-the-art tools and techniques for optimizing and accelerating ML model training and inference, with proven experience applying them: DeepSpeed, ZeRO, LoRA, quantization, 3D parallelism, distillation, bitsandbytes.
  • Extensive experience with AWS, including a strong understanding of its various services and how they can be integrated for robust, scalable ML solutions.
  • Experience with different modalities (image / audio).

The good stuff...

💸 You will be compensated well (salary + stock options + bonus)

📍 You will work in a hybrid setting with an office in London

🚲 You get a cycle to work salary sacrifice scheme to commute to the office

🏝 You get 25 days of annual leave + public holidays

🏥 You will get private health insurance

🥳 You will join an established company culture with regular socials and company retreats

👉 You can participate in a generous referral scheme

🚀 You will have huge opportunities for your career growth

You can see more about who we are and how we work here.
