NCWIT Announces 2026 AiC Collegiate Award Recipients


The NCWIT Aspirations in Computing (AiC) Program is pleased to announce the 2026 NCWIT AiC Collegiate Award recipients, celebrating 12 undergraduate and graduate students from 12 academic institutions nationwide. The award recognizes technical contributions to projects that demonstrate a high level of innovation and potential for impact.

The multi-tiered award structure includes Winners and Honorable Mentions.

The entire NCWIT AiC program platform is supported generously by Apple. AiC also receives support for specific national program elements; the NCWIT AiC Collegiate Award is sponsored by Bank of America.


Winners


Promise Ekpo | Class of 2028 at Cornell University

▶️ MARLHospital & FairSkillMARL: Fairness-Aware AI for Emergency Teamwork

Preventable teamwork failures in hospitals lead to over 200,000 U.S. deaths each year, often as fatigued clinicians struggle to coordinate tasks under pressure. To address this, Promise developed MARLHospital, a multi-agent reinforcement learning simulator that models emergency teamwork and clinician fatigue. The platform allows hospital protocols to be customized and supports both research and education. Building on this, Promise introduced FairSkillMARL, a framework that balances workload among team members while matching tasks to individual skill levels. This approach recognizes that fairness in emergency care is not always about equality—overusing experts or misassigning tasks can increase risk. FairSkillMARL integrates fairness as a core learning objective, enabling agents to develop efficient and equitable coordination strategies. Through extensive experiments using advanced reinforcement-learning algorithms, she demonstrated that moderate fairness weighting can double team success and reduce workload inequality by about 40 percent. Healthcare collaborators at Weill Cornell Medicine's Emergency Care Center and Northwell Health New York confirmed the simulator’s realism and potential as a training tool. The project is now expanding into Fair-GNE, exploring fairness as a constraint within generalized Nash equilibria. This work not only advances fairness-aware teamwork in healthcare but also has implications for fields like manufacturing, air-traffic management, and disaster response.
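For readers curious what "fairness as a core learning objective" can look like in practice, here is a minimal sketch of a fairness-weighted team reward. The function name, the coefficient-of-variation inequality measure, and the weighting scheme are illustrative assumptions, not FairSkillMARL's published objective:

```python
import numpy as np

def fairness_weighted_reward(task_success, workloads, skill_match, lam=0.5):
    """Team reward with a fairness penalty.

    task_success: base team reward (e.g., 1.0 on a successful resuscitation)
    workloads:    per-clinician workload accumulated this episode
    skill_match:  per-clinician fraction of assigned tasks matching their skills
    lam:          fairness weight (the project reports moderate values help most)
    """
    workloads = np.asarray(workloads, dtype=float)
    # Workload inequality as the coefficient of variation (0 = perfectly equal)
    inequality = workloads.std() / (workloads.mean() + 1e-8)
    # Penalty for assigning tasks that do not match clinician skill
    mismatch = 1.0 - float(np.mean(skill_match))
    return float(task_success - lam * (inequality + mismatch))
```

With equal workloads and perfect skill matching the penalty vanishes, so agents are only pushed away from unequal or mismatched assignments, mirroring the idea that fairness here is about appropriate, not identical, task allocation.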


Hana Winchester | Class of 2027 at Ohio State University

Machine Learning Models for Early Detection of Solar Array Arcing in Space Environments

Solar array arcing poses a critical threat to spacecraft in plasma-rich, high-radiation environments, where sudden discharges can severely damage photovoltaic cells and jeopardize power systems. NASA Glenn Research Center’s MAI TAI (Mitigating Arc Inception via Transformational Array Instrumentation) project seeks to both analyze arcing behavior and develop an active mitigation system using predictive algorithms and adaptive circuitry. During two NASA research internships, Hana created the first reproducible machine-learning and signal-processing pipeline to convert large, noisy, multi-channel arc waveform datasets into reliable data for arc prediction. Her work enables early detection of arcs—essential for triggering real-time hardware responses such as voltage adjustments or rapid switching—using only the electrical signals available on flight hardware. She engineered a multi-stage preprocessing pipeline in Python, applying techniques like Gaussian smoothing, drift correction, and a custom spike-detection algorithm, which became the project’s standard for arc onset detection. With clean data, she built and trained machine-learning systems—including Random Forests and LSTMs—to forecast arc inception. The resulting models achieved high predictive accuracy, providing the foundation for real-time, autonomous arc prevention and enhancing the resilience of future deep-space missions.
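As an illustration of the kind of preprocessing described above, the sketch below smooths a raw waveform, removes slow drift by subtracting a wide moving baseline, and flags the first sample exceeding a robust noise threshold. The parameter values, the MAD-based threshold, and the assumption that arcs appear as positive spikes are all illustrative, not the project's actual pipeline:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def detect_arc_onset(waveform, sigma=5, k=6.0):
    """Return the first sample index where a positive spike exceeds
    k robust noise deviations, or None if no spike is found.

    waveform: 1-D array of raw sensor samples
    sigma:    Gaussian smoothing width, in samples
    k:        detection threshold, in units of the estimated noise scale
    (Assumes arcs appear as positive spikes in the signal.)
    """
    smoothed = gaussian_filter1d(np.asarray(waveform, float), sigma)
    # Drift correction: subtract a much wider moving baseline
    baseline = gaussian_filter1d(smoothed, sigma * 20)
    detrended = smoothed - baseline
    # Robust noise scale via the median absolute deviation (MAD)
    mad = np.median(np.abs(detrended - np.median(detrended))) + 1e-12
    spikes = np.flatnonzero(detrended > k * 1.4826 * mad)
    return int(spikes[0]) if spikes.size else None
```

An onset index like this is exactly the kind of label that downstream forecasting models (Random Forests, LSTMs) can be trained against.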


Crystal Yang | Class of 2029 at the University of Pennsylvania

▶️ Audemy

A daily Wordle game with classmates highlighted for Crystal how easily blind students are excluded from educational activities. This experience inspired them to address the systemic lack of accessible learning tools. Audemy is their solution: an audio-first educational platform designed for blind and visually impaired students. Audemy leverages AI-powered audio to teach vocabulary, logic, math reasoning, and multi-step problem-solving, allowing students to learn independently without visual interfaces. The platform’s core is an adaptive audio learning framework that uses natural-language interaction and semantic audio labeling to translate visual elements into structured sound. By interviewing blind students and educators, Crystal continually refined Audemy to meet real classroom needs. Today, Audemy features over 50 audio-based educational games and reaches more than 200,000 blind learners across all U.S. states and 138 countries. Audemy has been recognized by outlets such as the White House Press, Fox News, and NPR. Above all, the project aims to close the educational accessibility gap, empowering blind and visually impaired students with the tools for academic progress, confidence, and independence.

Check out Audemy's games and more information here: https://audemy.org/


Honorable Mentions 


Navya Annapareddy | Class of 2025 at the University of Virginia


▶️ A New NICU: Improving Infant Developmental Outcomes With State of the Art Digital Twins


One in ten infants worldwide is born preterm, facing increased risks for lifelong health challenges and often requiring specialized care in Neonatal Intensive Care Units (NICUs). Infants in the NICU are at a significantly higher risk for developmental conditions like cerebral palsy (CP), yet only a small subset receive specialized assessments such as the General Movement Assessment (GMA). While GMAs are clinically accurate, most infants do not receive them due to resource constraints. To address this gap, Navya developed a computer-assisted GMA solution using computer vision and machine learning. Leveraging over 300 recorded GMA videos of preterm infants, she built a model that identifies abnormal movement patterns with high sensitivity (92%) and specificity (96%) compared to clinician assessments. The system extracts joint positions from video, calculates motion features used in GMAs, and predicts results while also highlighting which joints contribute to risk—an advancement beyond current methods. This end-to-end approach enables comprehensive, accessible developmental screening and personalized feedback for high-risk infants. The model facilitates early triage and supports the creation of a "digital twin," allowing infants to receive advanced developmental assessments remotely.
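To make the feature-extraction step concrete, here is a minimal sketch of motion features computable from per-frame joint positions. The choice of mean speed and jerkiness as the two features is an illustrative assumption, not the published model's feature set:

```python
import numpy as np

def movement_features(joints):
    """Compute simple per-joint motion features from tracked joint positions.

    joints: array of shape (frames, n_joints, 2) with (x, y) pixel coordinates
    Returns mean speed and mean jerk magnitude per joint, two cues a
    GMA-style movement screen might draw on (feature choice is illustrative).
    """
    j = np.asarray(joints, float)
    vel = np.diff(j, axis=0)          # frame-to-frame displacement
    acc = np.diff(vel, axis=0)        # change in velocity
    jerk = np.diff(acc, axis=0)       # change in acceleration
    speed = np.linalg.norm(vel, axis=-1).mean(axis=0)        # (n_joints,)
    jerkiness = np.linalg.norm(jerk, axis=-1).mean(axis=0)   # (n_joints,)
    return speed, jerkiness
```

Per-joint features like these also make it natural to report which joints contribute most to a risk prediction, as the paragraph above describes.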



Skylar Chan | Class of 2033 at the University of Maryland School of Medicine


▶️ Quantum State Fidelity for Functional Neural Network Discovery 


Skylar led a team developing quantum computing (QC) algorithms to accelerate the discovery of functional neural networks (FNNs) in the brain. Traditional similarity metrics often miss subtle yet meaningful neuron relationships, so he introduced quantum-state fidelity as a novel metric. The team encoded neuron tuning curves—responses to auditory stimuli—into quantum states and measured their similarity using quantum circuits. This approach enabled the identification of unique FNNs in high-dimensional neural data, validated on 76 neurons from a mouse auditory cortex. The team constructed multiple network representations, including minimum spanning tree and top connections by similarity. Quantum-derived networks showed statistically distinct structures and competitive neuroscientific relevance compared to classical methods, with different spatial organizations and network properties. This research is the first of its kind to apply quantum state fidelity to uncover functional brain connectivity. Their work builds a foundation for applications like improving neuroprosthetics, enhancing seizure prediction, and enabling earlier detection of neurodegenerative diseases. Ultimately, this approach supports clinical applications in neurosurgery and neurology, including drug development and potential brain-computer interfaces.
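The fidelity quantity at the heart of this approach can be illustrated classically. The sketch below amplitude-encodes two tuning curves as normalized state vectors and computes the fidelity |⟨ψ|φ⟩|²; the team measured this on quantum circuits, so the direct NumPy computation here is purely illustrative:

```python
import numpy as np

def encode_state(tuning_curve):
    """Amplitude-encode a (nonnegative) tuning curve as a normalized state."""
    v = np.sqrt(np.clip(np.asarray(tuning_curve, float), 0.0, None))
    return v / np.linalg.norm(v)

def state_fidelity(curve_a, curve_b):
    """Fidelity |<psi|phi>|^2 between two amplitude-encoded tuning curves."""
    return float(np.abs(encode_state(curve_a) @ encode_state(curve_b)) ** 2)
```

Fidelity is 1 for identical response profiles and 0 for non-overlapping ones, giving a similarity score between neuron pairs from which networks (minimum spanning trees, top-k connections) can be built.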



Maria Cuevas | Class of 2026 at Columbia University


▶️ Exploring the Capabilities of Astrophotonics for the Precise Alignment of Segmented Telescopes


Direct imaging allows astronomers to study exoplanet atmospheres, but it demands giant telescopes—typically constructed from many smaller mirror segments. These segments can shift, blurring images and making it harder to detect planets. While adaptive optics systems help correct distortions in real time, their wavefront sensors often struggle to detect the subtle misalignments unique to segmented mirrors.


To address this, Maria investigated a novel wavefront sensor based on photonic lanterns—specialized optical fibers that channel light into multiple paths. Using Python, she simulated both the photonic lantern and segmented telescope mirrors, developing algorithms to test the lantern’s ability to sense and correct misalignments. She then validated the approach on a laboratory optical bench, demonstrating that photonic lanterns reliably detect and help correct segment misalignments. This advancement supports astronomers and instrument designers aiming to build larger, more precise telescopes for direct imaging. Ultimately, Maria’s work contributes to clearer images, better atmospheric measurements, and the ongoing search for Earth-like, potentially habitable worlds beyond our solar system.
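The calibrate-then-reconstruct pattern common to wavefront sensing can be sketched as follows: poke each mirror segment, record the lantern's intensity response, and solve a least-squares problem to estimate misalignments. The function names and the small-perturbation linear-response assumption are illustrative, not Maria's actual simulation code:

```python
import numpy as np

def build_interaction_matrix(measure, n_segments, poke=1e-2):
    """Poke each segment and record the lantern's intensity response.

    measure: function mapping a segment-piston vector to lantern output
             intensities (a stand-in here for the optical simulation)
    Returns the matrix M with M[:, k] ~ d(outputs)/d(piston_k).
    """
    ref = measure(np.zeros(n_segments))
    cols = []
    for k in range(n_segments):
        p = np.zeros(n_segments)
        p[k] = poke
        cols.append((measure(p) - ref) / poke)
    return np.column_stack(cols)

def estimate_pistons(M, outputs, ref_outputs):
    """Least-squares estimate of segment misalignments from lantern outputs."""
    return np.linalg.lstsq(M, outputs - ref_outputs, rcond=None)[0]
```

The estimated pistons can then drive corrective actuator commands, which is the closed-loop role a wavefront sensor plays in an adaptive optics system.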



Preetal Deshpande | Class of 2027 at the University of California - Berkeley


▶️ OncoEquity


OncoEquity was co-founded to address a critical gap in cancer research: the lack of diverse genetic representation in drug development. More than 80% of genetic data used for cancer therapies comes from people of European descent, leaving Black, Indigenous, Latin American, and Asian populations at greater risk of receiving less effective treatments. As Chief Technology Officer, Preetal led the creation of an AI-driven platform that detects, measures, and corrects ancestral bias in early-stage cancer drug development datasets. The platform’s core AI engine evaluates pharmaceutical and clinical research datasets for representation gaps across ancestral groups, identifying missing populations and the impact on biomarker discovery and therapeutic target selection. To address these inequities, she developed the Validation Score, a clear metric of representational fairness, and built augmentation pipelines that enrich datasets with high-quality genomic profiles from global sources. This enables researchers to integrate ancestry-aware response predictions into preclinical decision-making. Early results show a 60% improvement in drug discovery accuracy. OncoEquity aims to make ancestry a tool for precision, not a barrier to care, ensuring no community is left behind by scientific progress.
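As a toy illustration of what a representational-fairness metric might compute, the sketch below scores a dataset's ancestral-group shares against reference population shares using total variation distance. Both the formula and the function name are hypothetical; OncoEquity's actual Validation Score is not described in detail here:

```python
def validation_score(counts, reference):
    """Hypothetical representational-fairness score in [0, 1].

    counts:    observed sample counts per ancestral group in a dataset
    reference: target population share per group (shares sum to 1)
    Returns 1 minus the total variation distance between the dataset's
    group shares and the reference shares: 1.0 means perfectly
    representative, lower values mean larger representation gaps.
    """
    total = sum(counts.values())
    shares = {g: counts.get(g, 0) / total for g in reference}
    tvd = 0.5 * sum(abs(shares[g] - reference[g]) for g in reference)
    return 1.0 - tvd
```

A single scalar like this makes it easy to flag datasets that need augmentation with additional genomic profiles before they feed into biomarker discovery.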



Marina Lin | Class of 2029 at Harvard University


A Novel Carbon-Aware Ant Colony System Algorithm for the Sustainable Generalized Traveling Salesman Problem


Transportation is the leading source of greenhouse gas emissions in the U.S., with cars and trucks accounting for the majority. To address this, Marina developed a novel Carbon-Aware Ant Colony System Algorithm that introduces an emission hyperparameter and draws inspiration from ant colony behavior. This innovative framework generates cost-effective, environmentally friendly pathways and is the first sustainability-aware algorithm for the Generalized Traveling Salesman Problem, a longstanding open problem first posed in 1969. Her algorithm quantifies the trade-off between carbon emissions and cost, demonstrating efficiency and scalability across large transportation networks. It considers emissions from various vehicle types and can be adapted for commercial airlines. Using UPS data, she optimized delivery routes, achieving an average 2% reduction in distance and 3% lower carbon emissions, all while ensuring vehicles return to their starting points—a vital factor for real-world logistics. These advances enable companies to reduce operational costs and support energy-efficient logistics globally, resulting in significant CO2 savings. Her research also supports policy initiatives, such as the U.S. Department of Transportation’s emission reduction goals, and can inform global efforts to promote low-carbon infrastructure and carbon-aware routing in national transportation strategies.
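To illustrate how an emission hyperparameter can enter an Ant Colony System, here is a minimal sketch of a carbon-aware transition rule. The blended cost, the parameter names, and the default values are illustrative assumptions, not Marina's published algorithm:

```python
import random

def edge_cost(dist, emis, alpha=0.5):
    """Blend travel cost with per-edge emissions via the hyperparameter alpha."""
    return (1 - alpha) * dist + alpha * emis

def choose_next(current, unvisited, pher, dist, emis, alpha=0.5, beta=2.0, q0=0.9):
    """Ant Colony System transition rule with a carbon-aware heuristic.

    With probability q0 an ant exploits the best-scoring edge; otherwise it
    samples proportionally to pheromone times the heuristic (inverse cost).
    pher, dist, emis are nested dicts keyed by city index.
    """
    def score(j):
        heuristic = 1.0 / edge_cost(dist[current][j], emis[current][j], alpha)
        return pher[current][j] * heuristic ** beta
    if random.random() < q0:
        return max(unvisited, key=score)
    weights = [score(j) for j in unvisited]
    return random.choices(list(unvisited), weights=weights)[0]
```

Sweeping alpha between 0 (pure cost) and 1 (pure emissions) is one way to expose the emissions-versus-cost trade-off the paragraph describes.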



Ayushi Mehrotra | Class of 2029 at Caltech


▶️ H-Sets: Hessian-Guided Discovery of Set-Level Feature Interactions in Image Classifiers


H-Sets addresses a fundamental challenge in artificial intelligence: understanding why deep learning models make their decisions. When a neural network classifies an image, it is crucial to know which parts of the image influenced the outcome, especially in high-stakes fields like medical diagnosis or autonomous vehicles. Existing methods like Integrated Gradients focus only on individual pixels, missing how groups of pixels—such as a bird's head composed of eyes, beak, and feathers—interact to inform the model's choices.


Ayushi developed H-Sets to capture these complex set-level interactions. The method uses the Hessian matrix to identify pairs of interacting pixels, then expands these into larger, meaningful groups using a search algorithm with constraints to ensure efficiency. Each pixel group’s importance is scored with an adapted technique called Integrated Directional Gradients, leveraging concepts from game theory to ensure trustworthy explanations. She overcame major computational challenges, separating the detection and scoring phases for efficiency and rigor. After 400+ hours of development and testing on large datasets and multiple neural networks, H-Sets delivers focused, faithful explanation maps. The work was recognized at NeurIPS 2024 and has been submitted to CVPR 2026; in evaluation, H-Sets outperformed all baseline methods on key metrics.
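The pair-detection idea can be illustrated with a finite-difference estimate of a single Hessian entry: a large mixed partial derivative of the model output with respect to two pixels signals that they interact. This sketch is only illustrative of the underlying quantity; H-Sets' actual detection and scoring pipeline is more sophisticated:

```python
import numpy as np

def interaction_strength(f, x, i, j, eps=1e-3):
    """Central finite-difference estimate of the mixed partial d^2 f / dx_i dx_j.

    f: scalar model output (e.g., the logit of the predicted class)
    x: flattened input image; i, j: indices of the two pixels
    A large magnitude suggests pixels i and j jointly influence the output.
    """
    def fp(di, dj):
        xp = x.copy()
        xp[i] += di * eps
        xp[j] += dj * eps
        return f(xp)
    return (fp(1, 1) - fp(1, -1) - fp(-1, 1) + fp(-1, -1)) / (4 * eps ** 2)
```

Pixel pairs with strong interaction scores like this are exactly the seeds that a constrained search can then grow into larger, semantically meaningful groups.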



Meenakshi Nair | Class of 2029 at Carnegie Mellon University


▶️ Iterative Informal Settlement Detection and Mapping on VHR Satellite Imagery


Nearly 1.1 billion people currently live in slums (informal settlements) globally, a figure expected to rise by 2 billion over the next 30 years. These slums often suffer from unsafe conditions, a lack of essential services, and insecure land tenure. In cities like Karachi and Dar es Salaam, slum dwellers face unique challenges—from crime and inadequate infrastructure to increased exposure to disasters. Effective slum rehabilitation depends on accurate mapping and monitoring. Meenakshi addressed the lack of automated tools by developing a multi-step deep-learning segmentation model that fine-tunes large, generic segmentation models with slum-specific features and uses smaller sub-models to identify roads and vegetation. She further enhanced segmentation through a custom convolutional neural network pipeline, integrating multi-modal inputs and a minimask for improved interpretability of satellite imagery. Designed for accessibility, her model operates efficiently on standard laptops, making it a cost-effective and scalable tool for practitioners and NGOs. By providing clear visual outputs that highlight key features, her method enhances human decision-making and transparency, and is adaptable for use beyond slum mapping.



Neeley Pate | Class of 2028 at the University of Rochester


▶️ Personalizing and Guard-Railing Large Language Models to Increase Belief Accuracy in Social Networks


Political misinformation can mislead individuals, with significant consequences such as delaying climate action or preventing herd immunity from being reached through vaccination programs. Such misinformation is often spread socially, particularly on online social platforms. Neeley’s project focused on personalizing and guard-railing large language models (LLMs) to combat misinformed beliefs. She designed a tailored LLM system to provide fact-based, trusted information and persuasive messaging to users in online environments. In a study with 1,265 participants, individuals interacted in small online groups, with some having access to the personalized LLM. Participants rated the factuality of statements, saw others’ opinions (including personalized bot responses), and could update their beliefs and choose whom to follow based on these interactions. The project found that access to the LLM led to beliefs that were, on average, closer to the truth and shifted social networks toward factuality. Most participants chose to follow the LLM for guidance, and the system was effective even without users knowing they were interacting with a bot. This work demonstrates that personalized, guard-railed LLMs can help moderate content and reduce misinformation, offering valuable insights for the future design of AI systems and informing policy development.



Maria Beatriz Silva | Class of 2026 at New York University


▶️ GeneVA: A Dataset of Human Annotations for Generative Text to Video Artifacts


Diffusion models have enabled the generation of photorealistic images and, more recently, entire videos from text prompts. However, state-of-the-art video models introduce unique spatio-temporal artifacts that are not found in static images, such as distorted motion or inconsistent object placement across frames. As AI-generated videos become more prevalent, detecting these artifacts is crucial for applications like deepfake detection and content moderation. Yet, no dataset previously existed to capture how humans perceive artifacts in generated videos.


To address this gap, Maria Beatriz created GeneVA, the first large-scale dataset of human-annotated artifacts in AI-generated videos. GeneVA includes 16,451 artifact annotations across 16,356 videos generated from 5,452 prompts, with each video labeled by a human using per-frame bounding boxes, artifact descriptions, and quality ratings. The project introduces a novel taxonomy of spatio-temporal artifacts, provides a large-scale resource for artifact detection and video quality assessment, and demonstrates its utility by training an artifact detector and caption generator. Data was collected using a custom annotation platform, drawing from both open-source and commercial video generation models. Accepted to WACV, this work offers a foundational resource for more human-centered approaches to synthetic video evaluation and improvement.