Neural Architecture Search

Hand-crafting deep architectures struggles to keep pace with the proliferation of application domains and hardware constraints. Every setting, from cloud clusters to milliwatt edge boards, demands its own balance of accuracy, latency, energy budget, and memory footprint, and these trade-offs no longer align along a single optimisation axis. The result is a prohibitive investment in development cycles and experimentation.

Research in neural architecture search seeks to automate this design process, replacing exhaustive exploration with workflows steered by performance predictors, training-free (zero-cost) proxies, and transferable super-networks. Exploration no longer chases peak accuracy alone: it charts Pareto surfaces that weigh accuracy against computational efficiency, sustainability, and domain compatibility, whether for 3-D medical volumes or spectro-temporal sequences. Within this framework, context-aware search spaces and the ability to repurpose pre-trained architectures converge on pipelines that compress days of manual tuning into hours, yielding models tailored to their actual operational environment.
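As a minimal illustration of the multi-objective view described above (a hypothetical sketch, not this group's actual pipeline), the snippet below keeps only the Pareto-optimal candidates from a pool of architectures, assuming each candidate carries a predicted accuracy from a performance predictor or zero-cost proxy and a latency measured on the target device. All names and numbers are illustrative.

    # Hypothetical sketch: Pareto-front filtering of candidate architectures.
    # Assumes Python 3.9+; accuracy is "higher is better", latency "lower is better".
    from dataclasses import dataclass

    @dataclass
    class Candidate:
        name: str
        predicted_accuracy: float  # e.g. from a performance predictor or zero-cost proxy
        latency_ms: float          # e.g. measured on the target edge device

    def pareto_front(candidates: list[Candidate]) -> list[Candidate]:
        """Return candidates not dominated by any other candidate."""
        front = []
        for c in candidates:
            dominated = any(
                o.predicted_accuracy >= c.predicted_accuracy
                and o.latency_ms <= c.latency_ms
                and (o.predicted_accuracy > c.predicted_accuracy or o.latency_ms < c.latency_ms)
                for o in candidates
            )
            if not dominated:
                front.append(c)
        return front

    if __name__ == "__main__":
        pool = [
            Candidate("arch-A", 0.91, 42.0),
            Candidate("arch-B", 0.89, 18.0),
            Candidate("arch-C", 0.88, 55.0),  # dominated by arch-A in both objectives
        ]
        for c in pareto_front(pool):
            print(c.name, c.predicted_accuracy, c.latency_ms)

In a real search the same filtering step would typically sit inside an evolutionary or predictor-guided loop, with additional objectives such as energy or memory footprint added to the dominance check.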

Contact: Eugenio Lomurno
