Guest post by William Tambellini of RWS Language Weaver.
This post shows how RWS Language Weaver, a comprehensive and adaptable neural machine translation platform, uses ArrayFire to run AI algorithms at scale. Language Weaver provides secure enterprise machine translation adapted to client content, empowering organizations to communicate without language barriers.
Language is often a barrier to clear communication with internal and external stakeholders. For governments, Language Weaver brings a global perspective into analytics pipelines, integrating with content intelligence applications to minimize the effort required to translate multilingual content. For global enterprises, Language Weaver helps improve collaboration between teams, increase productivity, and go to market faster internationally. For legal and compliance teams, Language Weaver manages multilingual data for cross-border litigation and regulatory compliance.
Language Weaver’s proprietary AI library runs either on the CPU alone or on CPU+GPU. It uses the ArrayFire library for the low-level tensor algebra needed to infer or train ANN (artificial neural network) layers on the GPU or CPU.
Our software stack consequently takes advantage of the best open-source libraries, including ArrayFire.
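To make "low-level tensor algebra for ANN layers" concrete, here is a minimal sketch of a single feed-forward layer. This is purely illustrative: NumPy stands in for the matrix-multiply and element-wise kernels that ArrayFire would dispatch on GPU or CPU, and the shapes and names are invented for the example, not taken from Language Weaver's code.

```python
import numpy as np

def dense_layer(x, W, b):
    """One feed-forward ANN layer: y = relu(x @ W + b).

    In a stack like Language Weaver's, this matrix multiply and the
    element-wise ReLU are the kind of operations delegated to a tensor
    library such as ArrayFire; NumPy is used here only for illustration.
    """
    return np.maximum(x @ W + b, 0.0)

# Toy shapes: a batch of 2 token vectors of width 4, projected to width 3.
x = np.array([[1.0, -2.0, 0.5, 3.0],
              [0.0,  1.0, 1.0, -1.0]])
W = np.full((4, 3), 0.1)
b = np.zeros(3)

y = dense_layer(x, W, b)
print(y.shape)  # (2, 3)
```

A production engine chains many such layers, which is why batching them into fused device kernels, rather than issuing one small operation at a time, matters so much for throughput.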
Running efficiently on NVIDIA GPUs means launching the best CUDA kernels at the right time for whatever inputs the required neural network architecture presents. ArrayFire provides just-in-time compilation, letting us adapt to our many different ANN architectures, each with unique and dynamic tensor shapes.
The typical ANN architecture used in deep learning today is the “transformer.”
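The computational core of a transformer is scaled dot-product attention. The sketch below shows that formula, softmax(QKᵀ/√d)·V, in NumPy; the shapes are toy values for illustration, and real engines such as Language Weaver run the equivalent operations through GPU tensor libraries rather than NumPy.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Core transformer operation: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)   # similarity of each query to each key
    return softmax(scores) @ V      # convex combination of value vectors

# Toy example: 3 query tokens, 3 key/value tokens, head width 4.
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))

out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4)
```

Because the number of tokens varies from sentence to sentence, the tensor shapes flowing through this computation change constantly at inference time, which is exactly the dynamic-shape situation that just-in-time kernel compilation addresses.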
Language Weaver achieves state-of-the-art speed for running ANNs on NVIDIA GPUs. ArrayFire enables multi-generation GPU support with a single library: Language Weaver runs today on four generations of NVIDIA GPUs (Pascal, Volta, Turing, Ampere) with just one binary, afcuda.so/dll.
In the future, we will assist the ArrayFire team in extending support for high-speed CPU execution, NVIDIA Hopper GPUs, Intel GPUs, and cross-device arrays, where ArrayFire computations can operate on tensors residing on different devices.