OpenAI is venturing deeper into AI-generated music, developing a tool that can create original compositions from both text and audio prompts, according to a report by The Information. The project, reportedly still in its early stages, involves collaboration with students from The Juilliard School, one of the world's most renowned performing arts institutions. These students have been helping OpenAI build high-quality training data, including annotating music scores so the model can better understand rhythm, melody, harmony, and composition styles.
The company’s vision is to build an advanced generative music system that can produce everything from instrumental accompaniments and background scores to complete original songs based on minimal user input. For instance, users could generate a custom guitar riff, a vocal backing track, or even a cinematic soundtrack simply by describing what they want in words or providing a short audio sample.
While OpenAI has explored music generation before, with research projects such as MuseNet and Jukebox, none of those earlier experiments reached the refinement or scale of its text and image tools like ChatGPT and DALL·E. This new project could mark a major step toward integrating music generation into OpenAI’s broader creative ecosystem.
If successful, the tool could position OpenAI as a strong competitor to existing players in the space, such as Suno, Udio, and ElevenLabs, all of which are racing to define the future of AI-powered sound and music. At the same time, these technologies raise pressing questions about copyright, originality, and the impact of automated content on human artists and the music industry.
OpenAI’s latest project underscores how rapidly the boundary between human artistry and machine intelligence is shifting, potentially ushering in an era where anyone can compose professional-grade music with the help of artificial intelligence.