(Research/Personal) Projects Ideas

Hi, I was wondering if anyone had any cool ML project ideas they would be willing to share. They're mainly for my own personal projects, but depending on the size of the project, I'd be open to working on it with others!
My main interests are computer vision, multimodal systems, generative models, recurrent models, and using ML in mobile apps (I'm interested in fields like RL/GNNs too, but to be honest I don't have much experience in them).

In particular, I'm looking for projects along the lines of: Vision-Language Project Ideas

I have an RTX 3070 for compute, but I may be able to get more.

Project Proposal: Optimizing Mixture of Experts (MoE) Models for Machine Translation

Executive Summary

This proposal outlines a project aimed at improving the efficiency, adaptability, and performance of Mixture of Experts (MoE) models, specifically tailored to machine translation tasks. Leveraging current approaches to routing algorithms and efficiency metrics, and collaborating with the broader AI research community, this project seeks to raise the benchmarks for MoE model capabilities. By focusing on machine translation as the primary use case, we aim to develop a scalable, efficient model that demonstrates measurable improvements in computational efficiency and accuracy while remaining adaptable to other tasks.

Project Objectives

  1. Develop an Advanced Routing Algorithm: Create a dynamic, adaptive routing algorithm using reinforcement learning, evolutionary algorithms, or predictive models to efficiently manage data flow within the MoE architecture, ensuring optimal expert utilization with minimal overhead.

  2. Establish Comprehensive Efficiency Metrics: Define and implement specific metrics to gauge efficiency gains, including effective throughput, energy efficiency, and cost efficiency, alongside traditional metrics like FLOPs, parameter count, and memory utilization.

  3. Create a Scalable Machine Translation MoE Model: Utilize the enhanced routing algorithm and efficiency metrics to build an MoE model focused on machine translation, providing a clear benchmark for performance and efficiency improvements.

  4. Foster Collaboration and Open Innovation: Engage with the AI research community through open-source contributions, publications, and collaborations, leveraging external expertise and fostering a collaborative development environment.


Project Phases

  1. Routing Algorithm Brainstorming and Development:

    • Evaluate potential approaches for the routing algorithm, including reinforcement learning, evolutionary algorithms, and predictive models.
    • Develop a proof of concept for the most promising approach, focusing on real-time learning capability, low complexity, and compatibility with sparse activation.

  2. Efficiency Metrics Specification:

    • Define detailed efficiency metrics tailored to machine translation tasks, considering normalization for task-agnostic applicability and specifying metrics based on the target deployment environment (single-GPU setup).

  3. Baseline Establishment and Benchmarking:

    • Conduct a comprehensive literature review and survey existing open-source libraries to establish a performance baseline for current MoE models in machine translation.
    • Benchmark the new MoE model against these established baselines to demonstrate efficiency and performance improvements.

  4. Collaborative Development and Open-Source Engagement:

    • Identify potential collaborators through literature review and open-source project contributions.
    • Establish a collaborative framework for ongoing development and innovation, including public repositories, discussion forums, and regular updates to the AI research community.
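To make the routing phase concrete, here is a minimal NumPy sketch of the sparse top-k gating that sits at the heart of an MoE layer, together with a Switch-Transformer-style load-balancing term. Function names and the exact loss form are illustrative assumptions, not a fixed design; a learned (RL- or gradient-trained) router would replace the random gating matrix used here.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def top_k_route(token_embeddings, gate_weights, k=2):
    """Route each token to its top-k experts (illustrative sketch).

    token_embeddings: (n_tokens, d_model)
    gate_weights:     (d_model, n_experts) gating matrix
    Returns expert indices (n_tokens, k) and renormalized gate scores.
    """
    logits = token_embeddings @ gate_weights           # (n_tokens, n_experts)
    probs = softmax(logits, axis=-1)
    top_idx = np.argsort(-probs, axis=-1)[:, :k]       # top-k experts per token
    top_p = np.take_along_axis(probs, top_idx, axis=-1)
    top_p = top_p / top_p.sum(axis=-1, keepdims=True)  # renormalize over chosen experts
    return top_idx, top_p

def load_balance_loss(probs, top_idx, n_experts):
    # Encourages uniform expert usage: fraction of tokens whose first
    # choice is expert e, dotted with the mean gate probability for e.
    n_tokens = probs.shape[0]
    frac = np.bincount(top_idx[:, 0], minlength=n_experts) / n_tokens
    mean_p = probs.mean(axis=0)
    return n_experts * float(frac @ mean_p)
```

Sparse activation falls out of `top_k_route`: only the `k` selected experts run per token, which is what keeps compute low as the expert count grows.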

Target Tasks and Datasets

  • Primary Task: Machine Translation, chosen for its clear, measurable performance metrics and the availability of robust datasets for benchmarking.
  • Initial Datasets: Focus on the WMT (Conference on Machine Translation) benchmarks, which provide a diverse and challenging set of language pairs and translation contexts.
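As an illustration of why machine translation offers clear, measurable metrics, here is a toy sketch of modified (clipped) n-gram precision, the core statistic of BLEU. This is for intuition only; real evaluation would use a standard implementation such as sacreBLEU.

```python
from collections import Counter

def ngram_counts(tokens, n):
    """Count the n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def modified_precision(candidate, reference, n):
    """Clipped n-gram precision: candidate n-gram counts are capped
    ("clipped") at the reference counts, so repeating a correct word
    cannot inflate the score."""
    cand = ngram_counts(candidate, n)
    ref = ngram_counts(reference, n)
    if not cand:
        return 0.0
    clipped = sum(min(c, ref[g]) for g, c in cand.items())
    return clipped / sum(cand.values())

cand = "the cat sat on the mat".split()
ref = "the cat is on the mat".split()
p1 = modified_precision(cand, ref, 1)  # 5 of 6 unigrams match -> 5/6
```

Full BLEU combines these precisions over n = 1..4 with a brevity penalty, but the clipped-precision idea above is what makes the metric hard to game.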

Hardware Goals and Deployment Targets

  • Initial Development and Testing: Single GPU setups, widely accessible for development and scalable to cloud inference environments.

  • Long-term Vision: Adaptability to various deployment scenarios, including specialized hardware and constrained environments, ensuring broad applicability and efficiency.
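Back-of-the-envelope accounting like the following can anchor the efficiency metrics for a single-GPU target: an MoE FFN layer's total parameters grow with the expert count, while per-token active parameters (a proxy for compute) grow only with the routing top-k. The layer sizes below are hypothetical placeholders, not a proposed configuration.

```python
def ffn_params(d_model, d_ff):
    # two projection matrices of a standard transformer FFN (biases omitted)
    return 2 * d_model * d_ff

def moe_layer_stats(d_model, d_ff, n_experts, top_k):
    """Total parameter count vs. per-token active parameters
    for an MoE FFN layer with top-k routing."""
    total = n_experts * ffn_params(d_model, d_ff)
    active = top_k * ffn_params(d_model, d_ff)  # experts actually run per token
    return total, active

total, active = moe_layer_stats(d_model=512, d_ff=2048, n_experts=8, top_k=2)
# With 8 experts and top-2 routing, memory cost is 8x a single expert
# while per-token compute is only 2x -- the gap the metrics should capture.
```

This is why the proposal separates memory-side metrics (parameter count, VRAM) from compute-side ones (FLOPs per token, effective throughput): the two diverge sharply for MoE models.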

Expected Outcomes

  • A highly efficient, adaptive MoE model for machine translation that sets new benchmarks for computational efficiency and translation accuracy.

  • A dynamic routing algorithm that significantly reduces computational overhead, optimizes expert utilization, and adapts in real-time to evolving data patterns.

  • A model development and benchmarking framework that can be adapted to other AI tasks, promoting efficiency and adaptability across the AI landscape.

  • Stronger collaboration between academia, industry, and the open-source community, driving forward the innovation and applicability of MoE models.


This project represents a bold step forward in the optimization of Mixture of Experts models, focusing on machine translation to demonstrate significant advances in AI model efficiency, adaptability, and performance. Through innovative routing algorithms, comprehensive efficiency metrics, and a collaborative approach to development, we aim to redefine what’s possible with MoE models, setting new standards for the field.