1900 Broadway, 6th Floor,
New York, NY 10023
Phil Christianson is the Chief Product Officer at Xurrent, where he drives the strategic direction and execution of the platform roadmap, focusing on metrics-driven leadership to secure a winning market position. With extensive experience in product management, he previously led large teams and multimillion-dollar R&D initiatives at Wayfair, overseeing pricing systems and competitive intelligence. A University of Iowa graduate with a BBA in Management of Information Systems, Phil brings over two decades of expertise in building high-performing teams and delivering innovative solutions across industries.
At Cast AI, we’ve developed Container Live Migration to automatically consolidate stateful workloads, ensuring continuous uptime, reducing resource fragmentation, and cutting costs. Join us to see how we’re making Kubernetes work for stateful applications in a practical demo.
As Field CTO, Phil works with customers to educate and encourage Kubernetes best practices that lead to optimal cloud costs. With more than 15 years of experience across a wide range of positions, he is able to balance resiliency, performance, and cost to help customers achieve their goals.
Previously, Phil was a Director of Engineering for Security Products at Oracle Cloud. This experience helped shape his understanding of cloud-scale technology and best practices.
Discover how we slashed data center failures by 47% using AI while saving $2.4M annually. Through real case studies, learn how we achieved 96% failure prediction accuracy, cut cooling costs by 23%, and reduced incident response times by 85%.
Join us for a data-driven exploration of AI's impact on enterprise data center operations across multiple facilities. We'll demonstrate how our AI implementations delivered concrete improvements in four key areas: predictive maintenance, resource optimization, energy management, and security. Through real-world case studies, we'll show how our ML models achieved a 47% reduction in equipment failures with 96% prediction accuracy, while our resource optimization algorithms improved utilization by 31% and cut server latency by 38%. We'll detail our cooling efficiency gains that reduced PUE from 1.6 to 1.25, resulting in 23% cost savings. The presentation will cover our security framework's achievements in threat detection (98.5% accuracy) and incident response (85% faster than manual processes), plus our neural network-based capacity planning system that reaches 96% forecasting accuracy. Drawing from implementations across facilities ranging from <1MW to >5MW, we'll share practical strategies for AI adoption in mission-critical infrastructure, including common pitfalls and their solutions. Attendees will leave with actionable insights on implementing AI for data center optimization, supported by our evidence of $2.4M average annual savings in enterprise deployments.
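The PUE claim is easy to sanity-check: since PUE is total facility power divided by IT equipment power, a drop from 1.6 to 1.25 at constant IT load cuts total facility power by roughly 22%, in the same ballpark as the 23% cost savings cited. A minimal back-of-envelope sketch using the abstract's numbers:

```python
# Back-of-envelope check of the cited PUE improvement (1.6 -> 1.25).
# PUE = total facility power / IT equipment power, so at constant IT load,
# total facility power scales linearly with PUE.
def facility_power_reduction(pue_before: float, pue_after: float) -> float:
    """Fractional drop in total facility power at constant IT load."""
    return 1 - pue_after / pue_before

reduction = facility_power_reduction(1.6, 1.25)
print(f"{reduction:.1%}")  # ~21.9%, in the ballpark of the ~23% savings cited
```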
Ashok Jonnalagadda is a Principal/Lead Infrastructure Engineer based in the SF/Bay Area with over 12 years of extensive experience in enterprise technology. Currently serving at Hilmar Cheese Company in Hilmar, CA, Ashok leads critical infrastructure initiatives including data center modernization, disaster recovery implementation, and Azure cloud deployment. Prior to his current role, Ashok held positions at prominent organizations including Northwestern Medicine in Chicago and Dell EMC Professional Services. He holds a Master of Science in Information Assurance from St Cloud State University and a Bachelor of Engineering from Osmania University, India. Throughout his career, Ashok has demonstrated expertise in virtualization, storage, compute, networking, and security concepts. He has successfully spearheaded numerous technology modernization projects, implementing cutting-edge solutions across diverse environments. His technical proficiency spans multiple platforms including Azure, Windows, Linux, and VMware, along with various storage and networking technologies. Ashok is particularly skilled in fostering cross-functional partnerships, bridging gaps between business units and IT teams to enhance system performance and scalability. His approach combines technical excellence with strategic thinking, ensuring alignment between architecture roadmaps and business objectives. As a leader in infrastructure engineering, he continues to drive innovation in cloud technologies, backup solutions, and data center modernization.
Accelerate your CI workflows with vCluster—create lightweight, on-demand Kubernetes clusters for faster testing and development, reducing build times and overhead while supporting CRDs for production-like environments.
Continuous Integration (CI) workflows often encounter bottlenecks when testing Kubernetes applications. Traditional methods of building and tearing down full clusters, whether locally or in the cloud, can be both time-consuming and resource-intensive. vCluster, an open-source virtual cluster solution, offers a game-changing alternative by enabling lightweight Kubernetes clusters to be created on demand and torn down quickly. This talk will guide you through integrating vCluster into your CI pipelines and the benefits you get from it. vCluster enables rapid application testing in production-like environments, including support for custom resource definitions (CRDs), without the overhead of traditional cluster setups. By the end of this session, you'll be equipped to reduce build times, accelerate testing cycles, and enhance the overall developer experience: your clusters will take less time to build, leaving you more time to test.
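The kind of CI integration described here can be sketched as a pipeline job that creates a throwaway virtual cluster, runs tests against it, and deletes it afterwards. The sketch below uses GitHub Actions syntax; the job name, manifest paths, and test command are illustrative, and it assumes the runner already has the vcluster CLI installed and kubectl access to a shared host cluster:

```yaml
# Illustrative CI job: one disposable virtual cluster per pipeline run.
jobs:
  e2e-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Create ephemeral virtual cluster
        run: vcluster create ci-${{ github.run_id }} --namespace ci-${{ github.run_id }}
      - name: Deploy and test against the virtual cluster
        run: |
          vcluster connect ci-${{ github.run_id }} -- kubectl apply -f manifests/
          vcluster connect ci-${{ github.run_id }} -- ./run-e2e-tests.sh
      - name: Tear down
        if: always()  # clean up even when the tests fail
        run: vcluster delete ci-${{ github.run_id }}
```

Because each run gets its own virtual cluster inside a namespace of the shared host cluster, test runs stay isolated from each other without paying for full cluster provisioning each time.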
Hrittik is currently a Platform Advocate at Loft Labs and a CNCF Ambassador who has previously worked at various startups, helping them scale their content efforts. He loves diving deep into distributed systems and writing articles about them, and has spoken at conferences such as Open Source Summit Vienna, SREDays London, Azure Cloud Summit, UbuCon Asia, and Kubernetes Community Days Lagos and Chennai, among others! His best days are when he finds ways to create impact in the communities he’s a part of, whether through code, content, or mentorship!
There was a time when discussions swirled around Wasm being the next big thing and "taking down" Kubernetes. Then everyone quickly realized that Wasm is a runtime, and because of that, it needs a place to run. There are plenty of places where Wasm can run: locally, in serverless functions, on a VM, and even in Docker. One of the best places for Wasm to run is on the world's largest orchestrator, Kubernetes. In this session, you'll learn about:
* How Wasm and Kubernetes work together.
* How to package a Wasm binary as a container image.
* How to run the Wasm binary in Kubernetes.
* Which runtimes a Kubernetes cluster needs so that Wasm runs properly.
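On the runtime-availability point: Kubernetes selects a Wasm-capable runtime per pod through a RuntimeClass whose handler maps to a containerd runtime configured on the node. A minimal sketch, assuming nodes whose containerd has a Wasm shim installed; the handler name and image reference below are illustrative, not from the session:

```yaml
# Illustrative: RuntimeClass pointing at a Wasm-capable containerd shim,
# plus a Pod opting into it. The handler must match a runtime configured
# in containerd on the node (how depends on the shim you install).
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasm
handler: spin  # illustrative handler name
---
apiVersion: v1
kind: Pod
metadata:
  name: wasm-demo
spec:
  runtimeClassName: wasm
  containers:
    - name: app
      image: ghcr.io/example/hello-wasm:latest  # OCI image wrapping the .wasm binary
```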
Michael Levan is a Distinguished Engineer in the Kubernetes and Security space who spends his time working with startups and enterprises around the globe on Kubernetes consulting, training, and content creation. He is a trainer, 4x published author, podcast host, international public speaker, CNCF Ambassador, and was part of the Kubernetes v1.28 Release Team. Want to see what he is up to? https://www.michaellevan.net/
Join us for a talk on secret management. We'll dive into the challenges of protecting sensitive data in dynamic, distributed environments and look at tools that offer secure, scalable, and reliable solutions. Don't miss this opportunity to enhance your DevOps practices!
Secret management is a crucial aspect of DevOps, as it protects sensitive data used by applications and services. Secrets include API keys, credentials, tokens, certificates, and passwords that grant access to various resources and systems. If these secrets are compromised, attackers can exploit them to cause damage, steal information, or disrupt operations. The core challenge of secret management is how to securely store, distribute, and rotate secrets in a dynamic and distributed environment. Traditional methods of hard-coding secrets in configuration files or environment variables are not secure, scalable, or reliable. Moreover, secrets need to be updated frequently to comply with security policies and regulations and to prevent unauthorized access. To address these challenges, several tools and frameworks have been developed to provide secret management solutions for DevOps. These tools can help DevOps teams implement best practices for secret management.
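As a minimal illustration of the "don't hard-code secrets" principle (this is not any specific tool's API; the class and variable names are hypothetical): resolve secrets at use time through a small provider, so rotated values are picked up without redeploying code. In production the provider would be backed by a dedicated secret manager rather than plain environment variables:

```python
import os

class EnvSecretProvider:
    """Hypothetical sketch: secrets are resolved at use time, never
    hard-coded in source. Backing this with a real secret manager
    (instead of env vars) keeps the calling code unchanged."""

    def __init__(self, prefix: str = "APP_SECRET_"):
        self.prefix = prefix

    def get(self, name: str) -> str:
        key = self.prefix + name.upper()
        value = os.environ.get(key)  # fetched fresh on every call -> rotation-friendly
        if value is None:
            raise KeyError(f"secret {name!r} not set ({key})")
        return value

# Usage: the API key is read when needed, not stored in source control.
os.environ["APP_SECRET_API_KEY"] = "example-token"  # stand-in for a real injector
provider = EnvSecretProvider()
print(provider.get("api_key"))
```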
Michel started his career as a medical officer in the Royal Dutch Air Force, with a focus on pharma. After the air force, he continued in pharma, followed by time working in clinical pharmacology. While there, he transitioned to IT by learning UNIX and MUMPS, and developed a system for managing patients’ medical records. As his career progressed, his responsibility shifted from a deep technical perspective to a more visionary role. At the end of 2011, Michel authored a book on WebLogic administration for beginners. He joined Qualogy in April 2012, where he expanded his repertoire significantly, serving a wide range of customers with his knowledge of Java application servers, middleware, and application integration. He also broadened his multi-industry knowledge in his role as solutions or IT architect, working for customers in a range of sectors including financials, telecom, public transportation, and government organizations. In 2012, he received the industry-recognized title of Oracle ACE.
How should AI-based automated test generation make us rethink the software testing pyramid? We compare two automated test generation approaches (LLM vs. SBST), discuss open source tools and industry use cases, and provide developers with recommendations for integrating AI into their testing workflows.
Cohn’s Test Pyramid has been a foundational aspect of agile development, guiding developers in the delicate balance between unit, integration, and e2e testing. The recent surge in the adoption of AI-based code generation necessitates rethinking the canonical form of the testing pyramid as test-driven development takes on a “shift-left” approach, pushing complex integration testing earlier in the workflow. In this talk, we start by discussing the two main automated test generation approaches in use today (LLM-based vs. SBST-based) and the tradeoffs between them, then dive into open source generation software and current industry use cases of AI-based test generation. Finally, we address some common myths surrounding future advancements in this area and close with informed recommendations for developers on when to consider AI-based solutions for your testing workflows and which approach to use. Key takeaways for developers:
- A practical understanding of the tradeoffs between LLM- and SBST-based test generation approaches.
- Available test generation tools that can be used for your use cases.
- Recommendations for when and how to integrate automated test generation into your workflow.
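To make the SBST idea concrete: search-based test generation explores an input space looking for inputs that violate an expected property (the "oracle"), rather than asking a language model to write test code. A toy sketch, in which the function under test, its seeded bug, and the oracle are all invented for illustration:

```python
import random

def days_in_month(month: int) -> int:
    """Function under test, with a seeded bug for February."""
    if month in (1, 3, 5, 7, 8, 10, 12):
        return 31
    if month == 2:
        return 29  # bug: ignores non-leap years
    return 30

def search_for_failing_input(oracle, candidates, trials=1000, seed=0):
    """Toy search-based generation: randomly sample inputs and keep
    any that violate the oracle (real SBST tools guide the search with
    coverage or fitness functions rather than pure random sampling)."""
    rng = random.Random(seed)
    failures = set()
    for _ in range(trials):
        x = rng.choice(candidates)
        if not oracle(x):
            failures.add(x)
    return sorted(failures)

# Oracle: expected day counts for a non-leap year.
expected = {1: 31, 2: 28, 3: 31, 4: 30, 5: 31, 6: 30,
            7: 31, 8: 31, 9: 30, 10: 31, 11: 30, 12: 31}
oracle = lambda m: days_in_month(m) == expected[m]
print(search_for_failing_input(oracle, candidates=list(range(1, 13))))  # [2]
```

Each failing input found this way becomes a candidate regression test, which is the sense in which the technique "generates" tests.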
Hi, I’m Surbhi, currently a Senior Software Engineer at Google in NYC. My background includes 7 years of experience in Android native client development, platform engineering, performance optimization, backend architectures, and server-driven UIs, all of which have involved test-driven development. My work has primarily been in Java and C++. I graduated from Brown University with a degree in Computer Science, where I contributed to the department’s undergraduate teaching in data structures and algorithms, focusing on making courses inclusive and collaborative and introducing unit testing and the testing pyramid paradigm to the course curriculum, which has since become a mainstay. Since then, at Google, I have contributed extensively to intern hiring, mentoring early-career professionals, and helping foster inclusive team cultures. I enjoy following pro tennis, running, biking, cooking, board games, walking, eating good food, and exploring the city. Reach me at surbhimadan1995@gmail.com.
SLOs allow teams to prioritize the most impactful and important aspects of their services. Metrics that center the user experience give teams focus and goals. Highlighting the metrics that matter most gives teams space to disable alerts that contribute nothing to overall customer happiness.
Automation gives us more hands to deal with more issues without taking time away from more interesting work. Machine Learning tools group alerts together and help add context when things are noisy and distracting.
These tools combined help teams tackle a common incident response problem: alert fatigue. Full Service Ownership teams looking to improve the quality of their services can experiment with their SLIs and SLOs to find the work that will benefit their systems the most while also preserving their own sanity. Setting clear expectations and planning responses to potential issues based on pre-agreed norms prevents the stress and anxiety that can arise from unexpected system failures.
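The SLI/SLO relationship described above boils down to simple arithmetic: the error budget is the failure rate the SLO permits, and burn is actual failures measured against it. A minimal sketch with illustrative numbers (not from the session):

```python
def availability_sli(good_events: int, total_events: int) -> float:
    """SLI: fraction of requests that succeeded."""
    return good_events / total_events

def error_budget_remaining(sli: float, slo: float) -> float:
    """Fraction of the error budget left; negative means the SLO is breached."""
    allowed_failure = 1 - slo    # e.g. 0.1% for a 99.9% SLO
    actual_failure = 1 - sli
    return 1 - actual_failure / allowed_failure

sli = availability_sli(good_events=999_500, total_events=1_000_000)
print(f"SLI: {sli:.4%}")  # 99.9500%
print(f"Budget remaining vs a 99.9% SLO: {error_budget_remaining(sli, 0.999):.0%}")  # 50%
```

Tracking the remaining budget like this is what lets a team decide, on pre-agreed terms, when to silence low-value alerts and when to shift effort from features to reliability work.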
Race cars are built for speed, but a challenging track can hold them back. Is the network slowing down your AI/ML jobs? Join me to learn how to boost speed and resilience by tackling link flapping and network congestion. Turbocharge your AI/ML workloads, just like a race car at top speed.
Lerna is a Senior Solutions Engineer at Clockwork Systems, where she helps customers meet their performance goals with software solutions built on Clockwork.io’s foundational research. Prior to this, she was a Senior Solutions Architect serving global financial services customers at AWS for 3 years. Before that, Lerna spent 17 years as an infrastructure engineer at large financial services companies, working on authentication systems, distributed caching, and multi-region deployments using IaC and CI/CD, to name a few. In her spare time, she enjoys hiking, sightseeing, and backyard astronomy.
Fancy a peek into how AWS FIS brings chaos engineering to AWS Lambda? As usage of serverless technology grows, chaos engineering for serverless becomes even more crucial for ensuring reliable and available applications. Join us as we demo new capabilities for testing AWS Lambda and unpack how these new faults are built and run under the hood. Finally, learn valuable lessons gleaned from our customers' experiences with modern serverless applications.
Iris Sheu is a Sr. Technical Product Manager with AWS Reliability Services, and joined AWS in 2022. As a PM of AWS Fault Injection Service, she enjoys being able to help customers realize the value of resilience testing. She is focused on delivering products and tools that help customers build more confidence in their operational resilience. Iris is based in Washington, DC.
Saurabh Kumar is a Senior Solutions Architect based in North Carolina, USA, with a strong focus on resilience and chaos engineering. He is passionate about helping customers solve their business challenges and technical problems, from migration to modernization and optimization, leveraging his 20+ years of experience in the tech industry. He is also an active contributor to the tech community, having authored several publications on chaos engineering, AWS Fault Injection Service, and observability strategies.
From a CEO's perspective, integrating DevOps with machine learning pipelines is key to strategic advantage, driving innovation, market agility, and operational efficiency. This presentation underscores real-world successes and views DevOps as crucial for future business growth.
How can DevOps, when seamlessly integrated with machine learning pipelines, become a powerhouse for innovation and competitive advantage? This exploration, from a CEO's perspective, unveils DevOps not just as a collection of practices and tools but as a pivotal asset in strategic business positioning and market leadership. Learn how melding DevOps with machine learning amplifies its strategic importance, driving product innovation, enhancing customer experiences, and facilitating agile responses to market dynamics. We will highlight concrete examples where the synergy of DevOps and machine learning has led to notable business successes, including market expansion, the swift introduction of innovative products or features, and achieving operational efficiencies. The presentation concludes with a visionary outlook on DevOps, enriched with machine learning, as a transformative force in business growth and evolution.
Ben Savage, serving as the visionary CEO of Veritas Automata, brings to the forefront an extensive background in revolutionizing autonomous transaction processing through the strategic integration of blockchain, smart contracts, and machine learning. His tenure at Veritas Automata is marked by an unwavering commitment to innovation, operational efficiency, and leveraging technology for transformative business solutions. Prior to leading Veritas Automata, Ben was instrumental in shaping the future of IoT solutions as the CTO and Chief Innovation Officer at Apex Supply Chain Technologies, where he pioneered the development of the groundbreaking Pizza Portal for Little Caesars among other complex IoT applications.
With a career spanning over two decades, Ben's expertise encompasses product development, embedded systems, cloud computing, and supply chain management, honed through significant roles at Apex, Maersk, and Computer Sciences Corporation. His pioneering work has been recognized with numerous domestic and international patents, underscoring his contribution to advancing technology and business practices. Furthermore, Ben's thought leadership and strategic insights have made him a valued member of multiple advisory boards, where he continues to influence the direction of technological innovation and industry standards.
At the helm of Veritas Automata, Ben Savage is not just leading a company; he is steering an industry towards a future where technology serves as a cornerstone for efficiency, security, and growth, embodying the ethos of innovation, improvement, and inspiration that Veritas Automata champions.
Join us for an exclusive panel featuring top DevOps and SRE engineers who are at the cutting edge of blending hardware and software. They’ll dive deep into the unique challenges of managing infrastructure, optimizing performance, and ensuring flawless integration between the physical and digital realms. Expect insider tips, innovative solutions, and a firsthand look at what it takes to excel in this dynamic field—perfect for anyone looking to elevate their DevOps and SRE game!
Currently, Ale is the Director of Engineering at Viam. With a strong background in tech leadership, she has driven impactful projects at companies like Stripe and Code Climate, advancing product scalability, reliability, and team performance. As the co-founder of Latinas in Tech NYC, Ale advocates for diversity and inclusion in the tech industry. She is passionate about fostering a collaborative environment where everyone can grow their skills, deliver impactful solutions, and build resilient, scalable systems.