Hello <<Salutation>> Reader,  

We look forward to presenting the 2015 ISC Cloud & Big Data conference attendees with qualified and unbiased insights into the latest cloud and big data trends, innovations, services, and strategies aimed at enterprises and academia (with a focus on compute- and data-intensive applications) from September 28 to 30.
 
If you haven’t registered, we encourage you to visit the event website for the full program.
 
Here is why you shouldn’t miss out on this conference.
 
Workshops: Six proposals have been accepted so far. They will touch on a range of cloud and big data topics, such as Docker containers, cloud solutions for manufacturing, and deep learning with GPUs. All workshops will be delivered by top-notch instructors from academia and industry.
 
Keynotes: Two fascinating keynotes will open and close the conference, respectively. On September 29, Jan Vitt of DZ BANK will describe cloud computing in a German bank, asking, “Is there still a long way to go?” On the afternoon of September 30, in the closing talk, Prof. Peter Coveney will dissect the virtual human, describing in silico methods for personalized medicine. We are convinced that both keynotes will be first class!
 
Sessions: In between the keynotes, we have scheduled 10 sessions encompassing about 30 speakers. The sessions are designed to ensure that attendees get the most relevant information about cloud and big data in the context of business deployments and technology advances. We are grateful to the steering committee for all its help in finding the right speakers, and we are sure that these talks will be beneficial to attendees.
 
Vendor panel: Of great interest is a featured panel with leading companies providing products and services for cloud and big data users. Company representatives will discuss challenges, opportunities, and customer benefits based on their offerings in these areas. 
 
Lastly, Amazon Web Services, HGST, Intel, Fujitsu, ClusterVision, DDN, and NVIDIA are committed to sponsoring this event. We’d like to thank them for their support.
 
We look forward to your attendance.
 
Best regards,
  
Wolfgang Gentzsch and Sverre Jarp
Co-Chairs
ISC Cloud & Big Data
Cloud and Big Data in the Media

Founded by former CERN researcher Prof. Michael Feindt, Blue Yonder uses techniques such as predictive modeling and machine learning to help businesses improve profits. Recently, HPC Today had the chance to talk to him about how his company’s cloud platform helps with sales planning and various aspects of business analysis. Read the interview at http://www.hpctoday.com/verbatim/beyond-deep-learning-and-neural-networks/. On the first day of the conference, Prof. Feindt will discuss what it takes to be a data scientist, and on Tuesday he will deliver a talk on data-driven automation in the enterprise.

Container technology is increasingly seen as a key enabler for HPC cloud and big data applications. At the conference on Monday, Christian Kniep will deliver a workshop on Docker, currently the most well-known container framework in the industry. Kniep, an HPC software architect at QNIB Solutions, will present how Docker works, how it is being used, and its impact on HPC and big data infrastructure. EnterpriseTech recently talked with Kniep about some of these topics, and he gave a preview of the upcoming workshop. Read the interview at http://www.enterprisetech.com/2015/08/20/qnibs-kniep-docker-disruption-containers-in-hpc-and-isc/.

For fans of deep learning, a can’t-miss workshop on Monday is Using GPUs for Deep Learning, presented by a trio of engineers from NVIDIA. GPUs have become a popular processing platform for running deep learning applications such as image recognition and natural language processing. The GPU’s ability to handle a lot of data in parallel has propelled the architecture to the forefront of much cutting-edge research in these areas. To get a sense of the technology that will be presented at the workshop, read one of the most recent articles on the subject over at Datanami: http://www.datanami.com/2015/08/20/how-nvidia-is-unlocking-the-potential-of-gpu-powered-deep-learning/.
Sponsored Content
 
Active Archive is HGST’s newest object storage system, providing 4.7 petabytes of accessible, scalable, simple, and affordable raw data storage in a single rack. Active Archive helps data centers evolve from siloed data storage to cloud-scale active archiving. Its breakthrough TCO enables organizations to store and access more data, and ultimately to unlock its true value. Find out more here.
Register by September 16!

If you register by Wednesday, September 16, you can save 100 euros on the conference pass. For registration details, please click here.

Facebook
Twitter
Website
Copyright © 2015 Prometeus GmbH, All rights reserved.

unsubscribe from this list | update subscription preferences