Microsoft Research Asia's "星跃计划" research collaboration program has new openings! Recruitment is now open for three joint research projects run by Microsoft Research Asia's Beijing Lab and Vancouver Lab:
Multimodal AI
Large Language Models for Real-World Optimization
LLM-Empowered Knowledge Production and Consumption
These projects focus on large language models. We welcome your interest and applications!
The 星跃计划 program aims to give outstanding researchers the opportunity to work with multiple Microsoft research teams around the world on real frontier problems. Through the program, you will do impactful research under the joint guidance of top mentors from two Microsoft Research Asia labs, in an international research environment and a diverse, inclusive research culture. Join 星跃计划 and explore more possibilities in research with us, across the ocean!
The list of open 星跃计划 projects will be updated continuously, so stay tuned for the latest news!
Program Highlights
Conduct research under the joint guidance of top researchers from multiple Microsoft Research Asia labs, and exchange ideas in depth with researchers from different backgrounds
Focus on real frontier problems from industry and work toward results with impact on both academia and industry
Experience Microsoft's international, open research atmosphere and its diverse, inclusive culture through offline and online collaboration
Eligibility
Currently enrolled master's and PhD students (see the specific project requirements); deferred or gap-year students
Able to work full time in China for 6-12 months
Detailed requirements are given in the project descriptions below
▼
What are you waiting for?
Apply now!
Multimodal AI
Research Internships at Microsoft provide a dynamic environment for research careers with a network of world-class research labs led by globally-recognized scientists and engineers, who pursue innovation in a range of scientific and technical disciplines to help solve complex challenges in diverse fields.
We are a team at Microsoft Research that spans both the General Artificial Intelligence group (formerly the Natural Language Computing group) and the Microsoft Research Asia – Vancouver lab, focusing on multimodal AI research. We work on innovative research projects and exciting challenges in multimodal foundation models, generative AI techniques, document intelligence, and more. Our team has contributed influential open-source research that advances the multimodal capabilities of large foundation models and general AI, and these contributions have also been widely applied in Microsoft products and services.
We are seeking talented and motivated research interns to join us in cutting-edge Multimodal AI research. As a research intern, you will develop generalizable technologies that enhance the multimodal capabilities of AI models. You will collaborate with a pioneering team of world-class researchers across our Vancouver, Beijing, and Redmond labs to push for real-world applications. Ideal candidates will have a background in Computer Vision, Natural Language Processing, Machine Learning, and/or Document Understanding. We value your ideas and unique viewpoints and believe that our partnership will shape ambitious and impactful research contributions.
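As a concrete, deliberately simplified illustration of what evaluating multimodal capabilities can look like in practice, here is a minimal VQA-style sketch in Python. The answer_question stub, the file paths, and the toy benchmark are hypothetical placeholders for illustration only, not the team's actual models, tooling, or data.

```python
# Minimal sketch of a VQA-style evaluation loop (illustrative only).
from typing import Dict, List


def answer_question(image_path: str, question: str) -> str:
    """Placeholder for a multimodal model call (image + question -> answer)."""
    return "unknown"  # a real implementation would run the model here


def exact_match_accuracy(benchmark: List[Dict[str, str]]) -> float:
    """Score predictions by case-insensitive exact match against references."""
    correct = 0
    for item in benchmark:
        prediction = answer_question(item["image"], item["question"])
        if prediction.strip().lower() == item["answer"].strip().lower():
            correct += 1
    return correct / len(benchmark) if benchmark else 0.0


if __name__ == "__main__":
    # Hypothetical benchmark items standing in for a real multimodal dataset.
    toy_benchmark = [
        {"image": "samples/chart.png", "question": "Which quarter has the highest bar?", "answer": "Q3"},
        {"image": "samples/receipt.png", "question": "What is the total amount?", "answer": "$42.00"},
    ]
    print(f"Exact-match accuracy: {exact_match_accuracy(toy_benchmark):.2f}")
```

Real evaluations would of course involve larger benchmarks and richer metrics than exact match, but the loop structure stays the same.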
Responsibilities
Analyzing model behavior and optimizing models to achieve better accuracy, efficiency, and robustness in various applications.
Proposing and experimenting with innovative methods to enhance the multimodal capabilities of AI systems.
Collecting and curating multimodal datasets or benchmarks.
Conducting evaluations on multimodal capabilities.
Presenting findings at internal meetings and top-tier venues.
Qualifications
Required Qualifications
Major in Computer Science or a related STEM field.
Research Interns are expected to be physically located in a Microsoft worksite location for the duration of their internship.
Preferred Qualifications
Current knowledge of deep learning concepts.
Strong analytical, problem-solving, and communication skills.
Experience publishing academic papers in the field of Artificial Intelligence.
Impact-driven mindset with the ability to work and learn in a collaborative and diverse environment.
Coding proficiency in deep learning frameworks and experience in training and evaluating deep learning models, e.g., large language models, multimodal models, diffusion models.
Large Language Models for Real-World Optimization
Join our pioneering research team to work on harnessing the power of Large Language Models (LLMs) to address complex real-world optimization problems requiring long-term planning and dynamic information gathering from environments. Traditional optimization techniques often struggle with the high dimensionality, dynamic nature, and intricate dependencies inherent in real-world settings.
To address these challenges, our research aims to push the boundaries of LLM capabilities to automate decision-making processes, improve reliability, and provide innovative solutions to both emerging and classical optimization challenges. The successful candidate will have the opportunity to collaborate with world-class researchers and engineers from diverse backgrounds, access state-of-the-art computational resources, and contribute to the advancement of LLM research and its impact on real-world optimization problems.
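As a deliberately simplified illustration of the general pattern this project explores, and not the project's actual methodology, the sketch below shows an "LLM-in-the-loop" optimization cycle on a toy knapsack instance: a proposer suggests candidate solutions, the environment returns feedback, and the accumulated feedback history informs the next proposal. The propose_candidate function stands in for a real LLM call and merely perturbs the best candidate found so far; the instance data, loop length, and feedback format are all assumptions made for this example.

```python
# Illustrative LLM-in-the-loop optimization sketch on a toy knapsack instance.
import random
from typing import List, Tuple

WEIGHTS = [3, 5, 2, 7, 4]   # toy data, for illustration only
VALUES = [4, 6, 3, 9, 5]
CAPACITY = 10


def evaluate(selection: List[int]) -> Tuple[float, str]:
    """Environment feedback: an objective value plus a textual explanation."""
    weight = sum(w for w, s in zip(WEIGHTS, selection) if s)
    value = sum(v for v, s in zip(VALUES, selection) if s)
    if weight > CAPACITY:
        return float("-inf"), f"infeasible: weight {weight} exceeds capacity {CAPACITY}"
    return float(value), f"feasible: value {value}, weight {weight}"


def propose_candidate(history: List[Tuple[List[int], str]]) -> List[int]:
    """Placeholder for an LLM call that would read the feedback history and
    propose the next candidate; here it simply flips one item of the best
    candidate seen so far."""
    if history:
        best = max(history, key=lambda h: evaluate(h[0])[0])[0]
    else:
        best = [0] * len(WEIGHTS)
    candidate = best[:]
    index = random.randrange(len(candidate))
    candidate[index] = 1 - candidate[index]
    return candidate


history: List[Tuple[List[int], str]] = []
for step in range(20):
    candidate = propose_candidate(history)
    _, feedback = evaluate(candidate)
    history.append((candidate, feedback))
    print(f"step {step:2d}: {candidate} -> {feedback}")
```

In a real setting, the textual feedback would be folded into the model's prompt so it can reason about constraint violations and long-term trade-offs rather than perturbing solutions at random.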
Responsibilities
Conduct cutting-edge research on the application of LLMs to real-world optimization problems.
Develop and implement novel methodologies to improve the performance of LLMs in dynamic and complex environments.
Collaborate with cross-functional teams to integrate advanced AI models with traditional optimization techniques.
Design experiments and simulations to test new hypotheses and validate the effectiveness of LLM-driven solutions.
Publish research findings in top-tier conferences and journals, and present results to both technical and non-technical audiences.
Required Qualifications
Currently enrolled in a master's or PhD program in CS, EE, ML, Mathematics, or a related field.
Strong analytical and problem-solving skills.
Proficiency in Python, C/C++, and other programming languages.
Experience with Linux and development on Linux platforms.
Excellent communication and presentation skills.
Ability to work independently and collaboratively in a dynamic research environment.
Preferred Qualifications
Familiarity with optimization techniques and models.
Experience with machine learning frameworks (e.g., PyTorch, TensorFlow).
Knowledge of multi-agent systems and Active Learning.
Experience with LLMs and their applications in dynamic and complex environments.
Strong publication record in top-tier conferences and journals.
Active contribution to open-source projects on platforms like GitHub.
LLM-Empowered Knowledge Production and Consumption
Knowledge is essential for identifying issues, accelerating remediation, and enhancing existing infrastructure in large-scale systems. However, a knowledge gap persists because the vast infrastructure data is not easily consumable: it is immense and evolves dynamically. Large language and multimodal models have created opportunities to better support knowledge production and consumption, from gleaning new insights to extracting entities and generating signatures from unstructured data at scale, as demonstrated in recent research. In this project, we aim to leverage these models to automate and accelerate raw-data processing, build knowledge graphs, and connect them to gain a deeper understanding of system infrastructure.
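To make the idea concrete, here is a minimal, hypothetical sketch of that pipeline in Python: an LLM (stubbed out here) extracts (entity, relation, entity) triples from unstructured infrastructure text, and the triples are connected into a graph with networkx. The extract_triples stub and the sample incident text are illustrative assumptions, not the project's actual pipeline or data.

```python
# Illustrative sketch: LLM-extracted triples assembled into a knowledge graph.
from typing import List, Tuple

import networkx as nx

Triple = Tuple[str, str, str]


def extract_triples(text: str) -> List[Triple]:
    """Placeholder for an LLM call that would return structured triples
    extracted from the raw text (e.g., via a JSON-formatted prompt)."""
    # Hard-coded output standing in for model predictions on the sample below.
    return [
        ("service-A", "depends_on", "database-X"),
        ("incident-123", "impacted", "service-A"),
    ]


def build_graph(documents: List[str]) -> nx.MultiDiGraph:
    """Connect extracted triples from all documents into one graph."""
    graph = nx.MultiDiGraph()
    for doc in documents:
        for head, relation, tail in extract_triples(doc):
            graph.add_edge(head, tail, relation=relation)
    return graph


if __name__ == "__main__":
    sample_docs = ["Incident 123: service-A errors traced to database-X connection timeouts."]
    kg = build_graph(sample_docs)
    for head, tail, data in kg.edges(data=True):
        print(f"{head} --{data['relation']}--> {tail}")
```

A real system would call an actual model, validate and deduplicate the extracted triples, and handle far messier and continuously evolving data, but the production-to-consumption flow is the same.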
We'll work with scientists at the forefront of systems and networking research, leveraging world-leading platforms to solve challenging problems in this area. The current project team members, from both the MSRA Vancouver and MSR Redmond labs, have rich experience contributing to industry and the academic community through innovations transferred into production systems and publications at top conferences.
Qualifications
Major in computer science, electrical engineering, or an equivalent field
Solid knowledge of data structures and algorithms
Familiarity with Python, C/C++, and other programming languages; familiarity with Linux and development on Linux platforms
Good communication and presentation skills
Good English reading and writing skills; able to implement systems based on academic papers written in English and to write English documentation
Preferred Qualifications
Rich knowledge of machine learning and machine learning models
Basic security knowledge and participation in at least one security-related project
Familiarity with engineering processes is a strong plus
Active on GitHub; has used or contributed to well-known open-source projects
How to Apply
Eligible applicants, please fill out the application form below:
https://jsj.top/f/LwjRie
Or scan the QR code below to fill out the form and apply right away!