A platform to discover, discuss, and read arXiv research papers. It allows you to search for topics, join communities, and manage bookmarks and likes for papers.
Discover new and relevant papers through personalized recommendations based on your interests and previous interactions.
Engage with the research community by discussing papers, sharing insights, and gathering diverse perspectives right on the platform.
Search for topics, research ideas, or specific papers using a comprehensive and intuitive search bar.
Lets users explore and discover new arXiv papers through recommendations, helping them find relevant work in their fields of interest.
Enables users to join communities and participate in discussions related to specific research topics, fostering collaboration and knowledge sharing among researchers.
Users can bookmark and like research papers to save their favorite papers for quick access later and show appreciation for valuable work.
A powerful search tool that allows users to search for topics, research ideas, or specific papers, making it easier to navigate through a vast collection of scientific content.
Displays metrics such as views and likes on research papers, providing insights into the popularity and impact of a paper.
Allows users to read, discover, and discuss the latest research directly on alphaXiv. The platform is created by researchers who aim to make academia more open, accessible, and connected.
The platform is proudly supported by arXivLabs, facilitating collaboration and innovation in academic research.
Includes partnerships with organizations like the Allen Institute for AI, Cohere, The Unjournal, together.ai, and Akash to enhance the research discussion experience.
Allows users to download a PDF version of the paper for offline reading and reference.
Users can leave public comments or write private notes that only they can see, facilitating discussion and personal annotation.
Authors can claim a paper as their own to manage their profiles and keep information related to the paper up to date.
Users can bookmark papers to easily access them later from their profile.
RadVLM performs report generation, abnormality classification, and visual grounding across single- and multi-turn interactions, trained on over 1 million curated image-instruction pairs.
RadVLM introduces dialogue-based chest X-ray (CXR) interpretation, enhancing diagnostic capabilities by combining structured CXR interpretation with conversational AI.
Achieves strong performance on diagnostic tasks compared to existing vision-language models, while remaining competitive on other radiology tasks.
Provides insights into the benefits of joint training across multiple tasks in scenarios with limited annotated data, increasing the model's effectiveness and accessibility.
A significantly improved version of AlphaGeometry that solves Olympiad geometry problems by extending its language to cover harder problems involving movements of objects and linear equations over angles, ratios, and distances. It raises the coverage rate on International Mathematical Olympiad (IMO) geometry problems from 66% to 88%.
Combines improved language modeling with a knowledge-sharing mechanism, raising AlphaGeometry2's solving rate to 84% on all IMO geometry problems from the last 25 years.
Allows AlphaGeometry2 to solve geometry problems reliably and directly from natural language input as part of a fully automated system.
OmniHuman-1 introduces a unified framework that scales up training data while handling multiple conditioning signals such as text, audio, and pose. This results in superior performance across portrait, half-body, and full-body animation tasks.
The model maintains high-quality motion synthesis within a single model, enabling detailed and nuanced animations.