1900 Broadway, 6th Floor,
New York, NY 10023
Discover how we slashed data center failures by 47% using AI while saving $2.4M annually. Through real case studies, learn how we achieved 96% failure prediction accuracy, cut cooling costs by 23%, and reduced incident response times by 85%.
Join us for a data-driven exploration of AI's impact on enterprise data center operations across multiple facilities. We'll demonstrate how our AI implementations delivered concrete improvements in four key areas: predictive maintenance, resource optimization, energy management, and security. Through real-world case studies, we'll show how our ML models achieved a 47% reduction in equipment failures with 96% prediction accuracy, while our resource optimization algorithms improved utilization by 31% and cut server latency by 38%. We'll detail our cooling efficiency gains that reduced PUE from 1.6 to 1.25, resulting in 23% cost savings. The presentation will cover our security framework's achievements in threat detection (98.5% accuracy) and incident response (85% faster than manual processes), plus our neural network-based capacity planning system that reaches 96% forecasting accuracy. Drawing from implementations across facilities ranging from <1MW to >5MW, we'll share practical strategies for AI adoption in mission-critical infrastructure, including common pitfalls and their solutions. Attendees will leave with actionable insights on implementing AI for data center optimization, supported by our evidence of $2.4M average annual savings in enterprise deployments.
Ashok Jonnalagadda is a Principal/Lead Infrastructure Engineer based in the SF/Bay Area with over 12 years of extensive experience in enterprise technology. Currently serving at Hilmar Cheese Company in Hilmar, CA, Ashok leads critical infrastructure initiatives including data center modernization, disaster recovery implementation, and Azure cloud deployment. Prior to his current role, Ashok held positions at prominent organizations including Northwestern Medicine in Chicago and Dell EMC Professional Services. He holds a Master of Science in Information Assurance from St Cloud State University and a Bachelor of Engineering from Osmania University, India. Throughout his career, Ashok has demonstrated expertise in virtualization, storage, compute, networking, and security concepts. He has successfully spearheaded numerous technology modernization projects, implementing cutting-edge solutions across diverse environments. His technical proficiency spans multiple platforms including Azure, Windows, Linux, and VMware, along with various storage and networking technologies. Ashok is particularly skilled in fostering cross-functional partnerships, bridging gaps between business units and IT teams to enhance system performance and scalability. His approach combines technical excellence with strategic thinking, ensuring alignment between architecture roadmaps and business objectives. As a leader in infrastructure engineering, he continues to drive innovation in cloud technologies, backup solutions, and data center modernization.
The financial sector processes petabytes of data daily, making efficient data warehousing not just advantageous but essential for survival. This session reveals how modern data warehouse architectures are revolutionizing financial reporting, backed by implementation data from global banks managing $5-50 million data warehouse investments. We'll examine how these institutions achieved dramatic improvements: 60% faster report generation, 30% reduction in data reconciliation time, and 45% better regulatory compliance efficiency. Through real-world case studies, we'll dive deep into optimizing five critical architectural layers: data sources, staging, core warehouse, data marts, and presentation. Learn how leading banks are implementing hybrid cloud architectures to cut costs by 35% while maintaining strict compliance with GDPR, Basel III, and Dodd-Frank requirements. We'll explore how modern financial data warehouses deliver sub-second query response times for 90% of analytical workloads, and how AI-driven data quality management has reduced error rates by 75%. The session concludes with practical insights into emerging trends, including blockchain integration for enhanced audit trails, and provides actionable frameworks for organizations at any scale looking to modernize their financial data infrastructure.
Bharath Gaddam is a Senior Database Developer/Engineer with approximately 10 years of experience specializing in Oracle PL/SQL development. He has extensive expertise in designing, developing, and optimizing database applications across multiple platforms including Windows, UNIX, and Linux. Currently, he works at IRIS SOFTWARE, Inc., serving Citi Bank in Tampa, FL as a Sr. Database Engineer since June 2022. His previous roles include positions at ITHUB, Inc., where he worked with major clients like Wells Fargo and BCBSFL, as well as experience at Randstad Technologies and INT Technologies supporting Bank of America and Wells Fargo respectively. Bharath holds multiple degrees: a Bachelor of Engineering in Computer Science from JNTUH, India, and two master's degrees, one in Electrical Engineering from Fremont, CA, and another in Information Technology and Management from Campbellsville, KY. His technical expertise spans Oracle database technologies (19c/12c/11g/10g), MS-SQL, Sybase, and various development tools. He is particularly skilled in database performance tuning, ETL processes, and implementing complex database solutions. His experience with modern tools and technologies, including Python, Machine Learning, Gen AI, and Hadoop, demonstrates his commitment to staying current with emerging technologies. Throughout his career, Bharath has shown strong analytical and problem-solving abilities, with expertise in conducting requirement analyses and feasibility studies for major financial institutions. His experience also includes working with various automation tools, version control systems, and testing frameworks, making him a well-rounded database professional.
Explore how AI teammates are revolutionizing SRE work by handling routine investigations, providing context-aware analysis, and enabling teams to focus on engineering. Learn about real implementation challenges and how to prepare your team for this transformation.
This talk explores the emerging paradigm of AI-powered teammates in Site Reliability Engineering, examining how they're reshaping traditional SRE workflows and team dynamics. We'll dive into:

### From Tools to Teammates
- Why traditional dashboards and automation aren't enough for modern cloud complexity
- Key capabilities of AI teammates:
  - Autonomous investigation across multiple data sources
  - Context-aware analysis that understands system relationships
  - Root cause analysis and remediation recommendations
  - Accuracy and explainability

### Real-world Examples
- How AI teammates handle routine investigations
- Collaboration patterns between human and AI SREs
- Impact on MTTR and team dynamics
- Common implementation challenges and solutions

### Practical Guidance
- Preparing teams for AI collaboration
- Building trust between human and AI teammates
- Measuring success beyond traditional metrics
- Future implications for SRE career development

### Key Takeaways
Attendees will leave understanding:
- How to evaluate AI teammate capabilities
- Best practices for implementation
- Ways to maximize team effectiveness
- Strategies for addressing common concerns

This talk is ideal for SRE managers, team leads, and practitioners interested in understanding how AI teammates will transform their daily operations and long-term career trajectories. The presentation will include real examples while maintaining vendor neutrality, focusing on principles and practices that apply across different AI implementations.
François Martel is a Field CTO at Neubird.ai, where he helps enterprise SRE teams transform their operations through Generative AI. With over two decades of experience in cloud architecture and AI/ML, François specializes in addressing fundamental challenges in modern IT operations through Hawkeye, a GenAI-powered SRE solution that automates routine investigations and enables engineering teams to focus on strategic innovation.
Previously at Amazon, François led Analytics and AI/ML initiatives for Financial Services, developing comprehensive GenAI programs and driving enterprise-scale transformations. He brings deep technical expertise in cloud-native architecture, machine learning, and distributed systems, holding certifications in AWS Solution Architecture, Analytics, ML, and Generative AI, along with Kubernetes administration.
Connect with François to discuss the future of SRE, GenAI's impact on IT operations, and building resilient engineering teams.
Accelerate your CI workflows with vCluster—create lightweight, on-demand Kubernetes clusters for faster testing and development, reducing build times and overhead while supporting CRDs for production-like environments.
Continuous Integration (CI) workflows often encounter bottlenecks when testing Kubernetes applications. Traditional methods of building and tearing down full clusters, whether locally or in the cloud, can be both time-consuming and resource-intensive. vCluster, an open-source virtual cluster solution, offers a game-changing alternative by enabling lightweight Kubernetes clusters to be created on demand and torn down just as quickly. This talk will guide you through integrating vCluster into your CI pipelines and the benefits you gain from it. Virtual clusters enable rapid application testing in production-like environments, including support for custom resource definitions (CRDs), without the overhead of traditional cluster setups. By the end of this session, you'll be equipped to reduce build times, accelerate testing cycles, and enhance the overall developer experience: your clusters will take less time to build, leaving you more time to test.
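As a rough illustration of the pattern (the job layout, cluster naming, and secret wiring here are hypothetical, not taken from the talk), a CI pipeline might create one ephemeral virtual cluster per run, test against it, and always tear it down:

```yaml
# Hypothetical GitHub Actions job: an ephemeral vCluster per CI run.
# Assumes a reachable host cluster (via a KUBECONFIG secret) and the
# vcluster CLI installed on the runner.
name: integration-tests
on: [pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Create ephemeral virtual cluster
        run: vcluster create ci-${{ github.run_id }} --connect=false

      - name: Run tests inside the virtual cluster
        run: |
          # "vcluster connect ... --" runs the command against the
          # virtual cluster's kubeconfig rather than the host cluster.
          vcluster connect ci-${{ github.run_id }} -- kubectl apply -f manifests/
          vcluster connect ci-${{ github.run_id }} -- \
            kubectl wait --for=condition=Ready pods --all --timeout=120s

      - name: Tear down
        if: always()
        run: vcluster delete ci-${{ github.run_id }}
```

Because the virtual cluster lives inside a namespace of an always-on host cluster, creation takes seconds rather than the minutes a fresh managed cluster would need.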
Hrittik is currently Platform Advocate at Loft Labs and a CNCF Ambassador, who has previously worked at various startups helping them scale their content efforts. He loves diving deep into distributed systems and creating articles on them and has spoken at conferences such as Open Source Summit Vienna, SREDays London, Azure Cloud Summit, UbuCon Asia and Kubernetes Community Days - Lagos and Chennai among others! His best days are when he finds ways to create impact in the communities he’s a part of either by code, content, or mentorship!
There was a time when discussions swirled around Wasm being the next big thing and "taking down" Kubernetes. Then everyone quickly realized that Wasm is a runtime, and because of that, it needs a place to run. There are plenty of places where Wasm can run: locally, in serverless functions, on a VM, and even in Docker. One of the best places for Wasm to run is on the world's largest orchestrator, Kubernetes. In this session, you'll learn about:

* How Wasm and Kubernetes work together.
* How to create a Wasm binary via a container image.
* How to run the Wasm binary in Kubernetes.
* What runtime availability a k8s cluster needs to ensure that Wasm runs properly.
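To make the runtime-availability point concrete, here is a minimal sketch of how a cluster typically advertises a Wasm-capable runtime to the scheduler. The handler name and image reference are assumptions for illustration (they depend on which containerd shim, such as a Spin or wasmtime shim, is actually installed on the nodes), not prescriptions from the session:

```yaml
# Sketch: expose a Wasm shim via a RuntimeClass, then route a workload
# to it. Assumes a Wasm-capable containerd shim is installed on the
# nodes and registered under the handler name "spin".
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasm
handler: spin
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wasm-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wasm-app
  template:
    metadata:
      labels:
        app: wasm-app
    spec:
      runtimeClassName: wasm   # pods run under the Wasm shim, not runc
      containers:
        - name: app
          # Hypothetical OCI image that wraps the compiled .wasm binary
          image: ghcr.io/example/wasm-app:latest
```

The key idea is that the pod spec stays ordinary Kubernetes; only `runtimeClassName` changes which runtime executes the workload.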
Michael Levan is a Distinguished Engineer in the Kubernetes and Security space who spends his time working with startups and enterprises around the globe on Kubernetes consulting, training, and content creation. He is a trainer, 4x published author, podcast host, international public speaker, CNCF Ambassador, and was part of the Kubernetes v1.28 Release Team. Want to see what he is up to? https://www.michaellevan.net/
Join us for an engaging talk on secret management. We'll dive into the challenges of protecting sensitive data in dynamic, distributed environments. Learn about tools that offer secure, scalable, and reliable solutions. Don't miss this opportunity to enhance your DevOps practices!
Secret management is a crucial aspect of DevOps, as it protects sensitive data used by applications and services. Secrets can include API keys, credentials, tokens, certificates, and passwords that grant access to various resources and systems. If these secrets are compromised, attackers can exploit them to cause damage, steal information, or disrupt operations. The central challenge of secret management is how to securely store, distribute, and rotate secrets in a dynamic and distributed environment. Traditional methods of hard-coding secrets in configuration files or environment variables are not secure, scalable, or reliable. Moreover, secrets need to be updated frequently to comply with security policies and regulations and to prevent unauthorized access. To address these challenges, several tools and frameworks have been developed to provide secret management solutions for DevOps. These tools help DevOps teams implement best practices for secret management.
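A minimal sketch of the lookup pattern the talk argues for, assuming Python and an environment-variable backend (real deployments would delegate to a dedicated secret manager such as Vault or a cloud KMS; the function and names here are illustrative, not from the talk):

```python
import os
from typing import Optional

# Anti-pattern: a secret baked into source ends up in version control
# and cannot be rotated without a code change and redeploy.
# API_KEY = "sk-live-abc123"   # don't do this

def get_secret(name: str, default: Optional[str] = None) -> str:
    """Fetch a secret at call time rather than at import time.

    Lazy lookup means a rotated value is picked up on the next call.
    Swapping the body to query a secret manager leaves every call
    site unchanged, which is what makes rotation practical.
    """
    value = os.environ.get(name, default)
    if value is None:
        raise RuntimeError(f"secret {name!r} is not configured")
    return value
```

Failing loudly on a missing secret (instead of silently using an empty string) surfaces misconfiguration at startup rather than as a confusing downstream auth error.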
Michel started his career as a medical officer in the Royal Dutch Airforce, with a focus on pharma. After the air force, he continued in pharma, followed by time working in clinical pharmacology. While there, he transitioned to IT by learning UNIX and MUMPS, and developed a system for managing patients’ medical records. As his career developed, his responsibility shifted from a deep technical perspective to a more visionary role. At the end of 2011, Michel authored a book on WebLogic Administration for beginners. He joined Qualogy in April 2012 where he expanded his repertoire significantly, serving a wide range of customers with his knowledge about Java Application Servers, Middleware and Application Integration. He also increased his multiple-industry knowledge in his role as Solutions or IT architect by working for customers in a range of sectors, including financials, telecom, public transportation and government organizations. In 2012, he received the IT Industry-recognized title of Oracle ACE for bein
How should AI-based automated test generation make us rethink the software testing pyramid? We compare two automated test generation approaches (LLM vs. SBST), discuss open-source tools and industry use cases, and provide developers with recommendations for integrating AI into their testing workflows.
Cohn’s Test Pyramid has been a foundational aspect of agile development, guiding developers in the delicate balance between unit, integration, and e2e testing. The recent surge in the adoption of AI-based code generation necessitates rethinking the canonical form of the testing pyramid as test-driven development takes on a “shift-left” approach, pushing complex integration testing earlier in the workflow. In this talk, we start by discussing the two main automated test generation approaches in use today, LLM-based and search-based software testing (SBST), and the tradeoffs between them. We then dive into open-source generation software and current industry use cases of AI-based test generation. Finally, we address some common myths surrounding future advancements in this area and close with informed recommendations for developers on when to consider AI-based solutions for your testing workflows and which approach to use.

Key takeaways for developers:
- A practical understanding of the tradeoffs between LLM- and SBST-based test generation approaches.
- Available test generation tools that can be used for your use cases.
- Recommendations for when and how to integrate automated test generation into your workflow.
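To give a flavor of the SBST side of the comparison, here is a deliberately tiny sketch in Python: random search over inputs, keeping one witness per distinct outcome as a crude coverage signal. Production SBST tools (EvoSuite for Java, for example) use evolutionary search with branch-distance fitness functions; the toy function and search loop below are illustrative assumptions, not tools discussed in the talk:

```python
import random

def classify(x: int, y: int) -> str:
    """Toy function under test, with three branch outcomes to cover."""
    if x > 0 and y > 0:
        return "both-positive"
    if x > 0 or y > 0:
        return "one-positive"
    return "none-positive"

def generate_tests(fn, trials: int = 1000, seed: int = 0):
    """Minimal SBST-flavored search: sample random inputs and keep the
    first input that triggers each not-yet-seen outcome. The set of
    (input, outcome) pairs becomes a regression test suite."""
    rng = random.Random(seed)
    suite = {}  # outcome -> witness input
    for _ in range(trials):
        args = (rng.randint(-10, 10), rng.randint(-10, 10))
        suite.setdefault(fn(*args), args)
    return suite
```

Even this crude version shows the core tradeoff against LLM-based generation: SBST needs an executable target and a measurable signal, but never hallucinates an assertion, because every expected value is observed by actually running the code.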
Hi, I’m Surbhi, currently working as a Senior Software Engineer at Google in NYC. My background includes 7 years of experience in Android native client development, platform engineering, performance optimization, backend architectures, and server-driven UIs, all of which have involved test-driven development. My work has primarily been in Java and C++. I graduated from Brown University with a degree in Computer Science, where I contributed to the department’s undergraduate teaching in data structures and algorithms, focusing on making courses inclusive and collaborative and introducing unit testing and the testing pyramid paradigm to the course curriculum, which has since become a mainstay. Since then, at Google, I have contributed extensively to intern hiring, mentoring early-career professionals, and helping foster inclusive team cultures. I enjoy following pro tennis, running, biking, cooking, board games, walking, eating good food, and exploring the city. Reach me at surbhimadan1995@gmail.com.
Join us for an exclusive panel featuring top DevOps and SRE engineers who are at the cutting edge of blending hardware and software. They’ll dive deep into the unique challenges of managing infrastructure, optimizing performance, and ensuring flawless integration between the physical and digital realms. Expect insider tips, innovative solutions, and a firsthand look at what it takes to excel in this dynamic field—perfect for anyone looking to elevate their DevOps and SRE game!
Currently, Ale is the Director of Engineering at Viam. With a strong background in tech leadership, she has driven impactful projects at companies like Stripe and Code Climate, advancing product scalability, reliability, and team performance. As the co-founder of Latinas in Tech NYC, Ale advocates for diversity and inclusion in the tech industry. She is passionate about fostering a collaborative environment where everyone can grow their skills, deliver impactful solutions, and build resilient, scalable systems.