---
title: README
emoji: 👀
colorFrom: purple
colorTo: purple
sdk: static
pinned: false
---

# ABOUT AIME

AIME GmbH is based in Berlin. Its core competence is the development, production and Europe-wide sales of highly specialized servers and workstations for the development of artificial intelligence. With technology, innovative spirit and know-how along the entire AI value chain, AIME enables reliable and future-proof production of machine learning and deep learning models. With the AIME GPU CLOUD, we rent out affordable multi-GPU HPC servers.

AIME aims to be the breeding ground and engine of a cost-effective, startup-friendly, innovative development of the #AI landscape within Europe. We already offer a more modern technology stack than the competition, at a third of the price of the usual market participants in the cloud computing sector, providing an affordable platform at the highest technical level for small startups as well as SMEs and corporations. AIME wants to drive European AI development forward and help ensure that Europe is not left behind in the global race, but is perceived as an independent, innovative force. AIME is a German company and wants to strengthen the European position.

# AIME MLC

The AIME machine learning container management system (MLC) is an easy-to-install, run and manage Docker container framework for the most common deep learning frameworks such as TensorFlow and PyTorch. The core features are:

- Set up and run a specific version of TensorFlow, PyTorch or MXNet with one simple command
- Run different versions of machine learning frameworks and required libraries in parallel
- Manage required libraries (CUDA, cuDNN, cuBLAS, etc.) in containers, without compromising the host installation
- Clear separation of user code and framework installation; test your code with a different framework version in minutes
- Multi-session: open and run many shell sessions on a single container simultaneously
- Multi-user: separate container space for each user
- Multi-GPU: allocate GPUs per user, container or session
- Runs with the same performance as a bare-metal installation
- Repository of all major deep learning framework versions as containers

AIME MLC is a comprehensive software stack that enables developers to easily set up AI projects and navigate between projects and frameworks with just two lines of code:

```
mlc-create my-container Tensorflow 2.1.0
mlc-open my-container
```

After that, your deep learning framework is ready to be used.

AIME CLOUD instances and AIME servers and workstations ship with the AIME ML Container Manager preinstalled. The necessary libraries and GPU drivers for each deep learning framework are bundled in preconfigured Docker containers and can be started with a single command. The most common frameworks such as TensorFlow and PyTorch are preinstalled and ready to use. The AIME ML Container Manager makes life easier for developers, so they do not have to worry about framework version installation issues.

Find more in our GitHub repo: https://github.com/aime-team/aime-ml-containers/tree/master or our blog article: https://www.aime.info/blog/en/deep-learning-framework-container-management/

# AIME CLOUD

The AIME CLOUD is optimized for machine learning, deep learning and big data analytics. The multi-GPU server instances of the AIME cloud contain powerful NVIDIA and AMD compute accelerators like the

- NVIDIA RTX A5000
- NVIDIA RTX 6000
- NVIDIA RTX 6000 Ada
- NVIDIA A100 40/80GB
- NVIDIA H100
- AMD MI100
- AMD MI300

They meet the high requirements of massively parallel HPC computing applications like deep learning and rendering.
With the advantages of on-demand resources, they can be terminated on a weekly or monthly basis, making them ideal for deep learning training tasks or project-based compute assignments. But also as your dedicated bare-metal server with a yearly rental term, the AIME GPU cloud impresses as a maintenance-free solution with a favorable TCO.

The machines run secured behind a firewall but have full high-speed connectivity to the internet. They are accessible through a secure shell gateway and are configured to be used as multi-user remote Ubuntu desktop machines, with a connection fully encrypted and secured by the SSH protocol. You can take full advantage of thin clients, working on a notebook anywhere you like, and leave the 24/7 high-speed number crunching to your AIME remote server, with access to bare-metal hardware performance: full CPU, multi-GPU and SSD bandwidth, with no loss of performance due to virtualization and no sharing of the hardware with badly behaving neighbours that could stall the machine with their tasks.

If you need more CPU cores, more RAM, faster storage or an upgrade to more powerful GPUs, hardware upgrades are available on request: just tell us what you need and the system upgrade is done. Multiple instances can also be offered, connected in a dedicated 10 Gbit/s or 100 Gbit/s VLAN or directly via InfiniBand. A seamless migration to a new machine is also no problem.

The instances start preinstalled with a Linux OS and are configured with the latest multi-GPU drivers and libraries. With the preinstalled AIME ML Container Manager, you can easily set up AI projects and navigate between frameworks and projects.

# AIME HARDWARE

AIME servers and workstations are built for deep learning & high performance computing. You can save up to 90% by switching from your current cloud provider to AIME products. Our multi-GPU accelerated HPC computers come with preinstalled frameworks like TensorFlow, Keras, PyTorch and more. Start computing right away!
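The container workflow with the preinstalled AIME ML Container Manager can be sketched as a short shell session. This is a minimal sketch: the container names and framework versions are illustrative, and only the `mlc-create` and `mlc-open` commands shown in the AIME MLC repo linked above are used:

```shell
# Two containers with different framework versions can coexist on the same host
# (container names and version numbers are examples)
mlc-create tf-old Tensorflow 1.15.0
mlc-create tf-new Tensorflow 2.1.0

# Each container gets its own interactive shell; open them in separate
# terminals to run both framework versions side by side
mlc-open tf-old
mlc-open tf-new
```

Because the CUDA libraries and drivers are bundled per container, the two versions do not interfere with each other or with the host installation.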
Throughout Europe, researchers and engineers at universities, in start-ups, large companies, public agencies and national laboratories use AIME products for their work on the development of artificial intelligence.

AIME machines are designed and built to perform on deep learning applications, which require fast memory, high interconnectivity and lots of processing power. Our multi-GPU design reaches the highest currently possible throughput within this form factor. All of our components have been selected for their energy efficiency, durability and high performance. They are perfectly balanced, so there are no performance bottlenecks. We optimize our hardware in terms of cost per performance, without compromising endurance and reliability. Our hardware was first designed for our own deep learning application needs and evolved through years of experience with deep learning frameworks and customized PC hardware building.

### Iterate Faster

Waiting unproductively for a result is frustrating. The maximum acceptable waiting time is to have the machine work overnight, so you can check the results the next morning and keep on working.

### Extend Model Complexity

If you have to limit your models because of processing time, you surely don't have enough processing power. Unleash the extra possible accuracy to see where the real limits are. Train with more data, and learn faster what works and what does not, with the ability to make full iterations every time.

### Explore Without Regrets

Errors happen as part of the development process. They are necessary to learn and refine. It is annoying when every mistake is measurable as a certain amount of money lost to external service plans. Free yourself from running against the cost counter and run your own machine without losing performance!

### Protect Your Data

Are you working with sensitive data, or data that is only allowed to be processed inside your company?
Protect your data by not having to upload it to cloud service providers; instead, process it on your own hardware.

### Start Out Of The Box

Our machines come with a preinstalled Linux OS configured with the latest drivers and frameworks like TensorFlow, Keras, PyTorch and MXNet. Just log in and start right away with your favourite deep learning framework.

### Save Money

Cloud service costs can quickly grow to hundreds of thousands of euros or dollars per year for just a single instance. Our hardware is available for a fraction of this cost and offers the same performance as cloud services. The TCO is very competitive and can save you service costs in the thousands of euros every month. If you prefer not to buy your own hardware, check out our competitive hosted bare-metal server rental service.

Check: https://www.aime.info