

Incentive prizes deliver important results for the Nation and offer more “bang for the buck.”

Summary: 
How IARPA (Intelligence Advanced Research Projects Activity) uses incentive prizes to solve hard, important problems at a fraction of the cost of traditional contracts.

Last week, President Obama signed the American Innovation and Competitiveness Act (S. 3084), which updates and strengthens the incentive prize authority granted to all agencies in the America COMPETES Reauthorization Act of 2010. This builds on more than six years of experience Federal agencies have had conducting prizes and challenges under the America COMPETES Act. I caught up with IARPA Director Dr. Jason Matheny to discuss how the agency has used incentive prizes to drive innovation, and what IARPA’s vision is for the future of incentive prizes.

Tom Kalil: What prizes has IARPA offered and what were the results?

IARPA Director Dr. Jason Matheny: In 2014, IARPA launched its first prize challenge, INSTINCT, which sought algorithms to predict whether a person would keep a promise, based on neural, physiological, and behavioral data. The challenge drew on data provided by volunteers in IARPA's TRUST program, and the winning algorithms improved prediction accuracy by 15%.

In 2015, IARPA launched the ASPIRE challenge to improve speech recognition in noisy environments and with limited training data. Winners reduced the error rate by more than 50% compared with IARPA’s baseline system.

In 2016, we launched the Multi-View Stereo 3D Mapping challenge. Winners delivered automated systems that could accurately render 3D models from satellite images.

In 2017, we'll launch at least two new challenges: the Nail to Nail challenge, which will offer prizes to improve automated collection and recognition of fingerprints; and Functional Map of the World, which will offer prizes for accurately inferring building functions from overhead imagery.

We've encouraged our managers to more frequently use prize challenges alongside traditional research grants and contracts, so that we can broaden our research and engage non-traditional researchers.

TK: What have you gained from incentive prizes that you couldn’t have gotten from traditional grants and contracts?

JM: IARPA’s prizes complement our large-scale programs by helping us assess the state of the art, set ambitious goals, and provide opportunities for non-traditional researchers.

INSTINCT took place at the end of a large-scale program, and helped us to cost-effectively analyze the program's data. The data analysis in the prize was conducted for less than 1/10th the cost of a traditional research contract.

The 3D Mapping challenge, in contrast, took place at the start of a multi-year program (CORE3D), to help us assess the current state of the art and set ambitious research goals.

Challenges have also helped us gather solutions from sources that might not respond to more traditional Federal solicitations. IARPA has collaborated with NIST by providing datasets for machine learning challenges such as the 2015 Language Recognition i-Vector Machine Learning Challenge, in which small teams often beat traditional government contractors.

TK: What advice do you have for agencies that are considering increasing their use of incentive prizes?

JM: While first-time challenge runners are eager to begin gathering solutions, preparation is the submerged bulk of the iceberg. Running your challenge concept by experienced colleagues, ensuring the prize purse matches the required effort, developing a marketing plan to reach potential solvers, and clearly articulating the problem statement are all vital to success. The development process from idea to announcement usually takes more than six months.

Prizes are complementary to traditional acquisition methods, and seem well-suited to reach new participants, address difficult problems where the best approaches are unclear, and gather a range of ideas quickly from diverse sources. IARPA’s challenges have answered research questions that couldn’t be addressed by a single contractor, and provided valuable input to shape our programs.

But prizes are unlikely to work for every problem.  We've used them for problems in data analysis, where the barriers to entry are low, required resources (a computer and an internet connection) are widely distributed, and solutions come down to the expertise and ingenuity of solvers. It's less likely that we'd use prizes for problems that require expensive laboratory equipment, or where solutions depend on the development of costly hardware. On the other hand, the U.S. Department of Energy has successfully used its Wave Energy Prize for new hardware development.

TK: What other opportunities are you exploring to harness the scientific method to improve the way IARPA and other federal agencies support research?

JM: Prizes provide one way to quickly test the value of a line of research, or expand on one aspect of a research question. Prizes allow many methods to be tested and compared in parallel—sometimes for less than the cost of hiring a traditional contractor to test a single method. This makes prizes particularly valuable when the research question is well-defined and focused, but there is no clear evidence supporting any single potential path to a solution.

With more data we could also learn something about the cost-effectiveness of prizes compared to traditional grants and contracts. Most federally funded research is selected by a small number of experts, who deliberate on the quality of research proposals. Research in a range of disciplines casts doubt on the accuracy of deliberating groups of experts – they're prone to a variety of social biases and have a tendency to stifle or discount minority views. In research that IARPA funded on expert judgment, mechanisms rewarding accurate dissent, such as probability surveys and prediction markets, were more accurate than deliberating groups. It would be useful to run a set of randomized experiments, in which we test the accuracy and cost-effectiveness of a variety of mechanisms for selecting and funding research, including traditional expert panels, prizes, prediction markets, surveys, and others.
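The mechanisms Matheny describes work because individual probability forecasts can be scored directly against real outcomes. Below is a minimal sketch in Python, using invented data and one common scoring rule (the Brier score, which the interview does not name), of how scoring survey respondents individually can surface an accurate dissenter whose signal a single consensus forecast would hide.

```python
# Illustrative sketch only: invented forecasts and outcomes, not IARPA data.
# The Brier score is the mean squared error between stated probabilities and
# 0/1 outcomes; lower is better, so accurate dissent is rewarded.

def brier_score(probabilities, outcomes):
    """Return the mean squared error between forecasts and binary outcomes."""
    return sum((p - o) ** 2 for p, o in zip(probabilities, outcomes)) / len(outcomes)

# Three hypothetical questions that resolved to: happened, didn't, happened.
outcomes = [1, 0, 1]

# A deliberating panel issues one hedged consensus forecast per question.
panel_consensus = [0.5, 0.5, 0.5]

# Survey respondents forecast independently; one dissenter is well calibrated.
survey = {
    "forecaster_a": [0.6, 0.4, 0.7],
    "dissenter":    [0.9, 0.1, 0.8],
}

print("panel consensus:", brier_score(panel_consensus, outcomes))
for name, probs in survey.items():
    print(name + ":", brier_score(probs, outcomes))
# Scoring each respondent separately identifies the accurate dissenter,
# whereas the panel's single consensus averages that signal away.
```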

 

Tom Kalil is Deputy Director for Technology and Innovation at the White House Office of Science and Technology Policy.