Leverages Just-Released AWS Inf2 Accelerator Technology for Production Workloads of Deep Learning NLP Models
Finch Computing, a developer of natural language processing technology used across the Federal government and by financial firms, content aggregators, and other private-sector organizations, today announced at the AWS re:Invent conference that it is using the just-released AWS Inf2 instances to power production workloads of its deep learning NLP models.
Finch Computing creates software that reads and understands text like a human. Its product portfolio includes the real-time NLP solution Finch for Text®, the dashboard solution Finch Analyst®, and a suite of data-as-a-service products under the FinchDaaS® umbrella. The company is a pioneer in state-of-the-art NLP and entity-driven intelligence capabilities such as text summarization and entity relationship discovery.
“Each of these capabilities requires using large deep learning models, and performing them at scale on real-time, global data feeds requires a massive amount of computing power,” Finch Computing President Scott Lightner said.
“Using traditional CPU and GPU computing for these tasks quickly becomes cost-prohibitive,” Finch Computing Chief Architect Franz Weckesser added. “With Inf1 instances on production NLP workloads, we were able to achieve 80% cost savings over GPUs. AWS’s new Inf2 platform provides greater performance and allows us to move more deep learning models to the platform faster. We already have entity relationship extraction running on Inf2, and entity coreference resolution and text summarization are next.”
To learn more about Finch Computing’s solutions and how it deploys the latest in NLP and machine learning alongside the latest accelerator technologies, or to sign up for a free trial of its Finch for Text® product, please visit www.finchcomputing.com.