Crossrail Place,
Canary Wharf,
London E14 5AR, UK
Level -2
Tube access
Jubilee, Elizabeth and DLR lines: Canary Wharf station
Peter Marshall is a technology leader and community builder with a background in developer relations, data architecture, and digital transformation. As Director of Developer Relations at Imply, he leads programs that grow and engage the global Apache Druid community through education, support, and events. With experience across startups, enterprises, and the public sector, Peter brings a blend of technical expertise and strategic vision to help organizations connect with developers and drive impact through open source.
Let's take a look at some of the basic NGINX metrics to monitor and what they indicate. We start at the application layer and move down through process, server, hosting provider, external services, and user activity. With these metrics, you get coverage for both active and incipient problems with NGINX.
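As a concrete taste of the application-layer metrics involved, here is a minimal sketch (not from the talk itself) that polls NGINX's stub_status page with Python and parses the basic counters; it assumes stub_status has been enabled at http://localhost/nginx_status.

```python
# Minimal sketch: poll NGINX's stub_status page and parse the basic counters.
# Assumes a server block exposes stub_status at http://localhost/nginx_status.
import re
import urllib.request

def fetch_nginx_status(url="http://localhost/nginx_status"):
    with urllib.request.urlopen(url, timeout=5) as resp:
        text = resp.read().decode("utf-8")

    # Typical stub_status output:
    #   Active connections: 291
    #   server accepts handled requests
    #    16630948 16630948 31070465
    #   Reading: 6 Writing: 179 Waiting: 106
    metrics = {"active_connections": int(
        re.search(r"Active connections:\s+(\d+)", text).group(1))}
    accepts, handled, requests = map(
        int, re.search(r"\n\s*(\d+)\s+(\d+)\s+(\d+)", text).groups())
    metrics.update(accepts=accepts, handled=handled, requests=requests)
    reading, writing, waiting = map(int, re.search(
        r"Reading:\s*(\d+)\s*Writing:\s*(\d+)\s*Waiting:\s*(\d+)", text).groups())
    metrics.update(reading=reading, writing=writing, waiting=waiting)
    return metrics

if __name__ == "__main__":
    print(fetch_nginx_status())
```

Enabling the endpoint only takes a `stub_status;` directive in a location block; a growing gap between the accepts and handled counters, for example, is an early hint of dropped connections.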
Currently providing technical evangelism for NGINX, Dave works with DevOps teams, developers, and architects to understand the advantages of modern microservice architectures and orchestration in solving large-scale distributed systems challenges, especially with open source and the innovation it enables. Dave has been a champion for open systems and open source from the early days of Linux, through open distributed file systems like XFS, GFS, and GlusterFS, to today's world of clouds and containers. He often speaks on the real-world issues associated with emerging software architectures and practices, on open source software, and on creating new technology companies.
Dave has spoken on technical topics like distributed request tracing, modern monitoring practices, and open source projects from both corporate and foundation perspectives, and on how open source innovation powers today's world.
Dave was named one of the top ten pioneers in open source by Computer Business Review, having cut his teeth on Linux and compilers before the phrase "open source" was coined. Well versed in trivia, he won a Golden Penguin in 2002. When he's not talking, you can find him hiking with his trusty camera, trying to keep up with his wife.
The shift from traditional testing to chaos engineering marks a revolution in building reliable systems.
This session unpacks the concept’s history and role in ensuring system resilience. We’ll survey several approaches to chaos engineering before turning to Chaos Engineering as a Service with Amazon’s Fault Injection Service.
You’ll leave with insights into crafting and running Fault Injection Service experiment templates, plus a live demo of how we can now test serverless code in AWS with chaos engineering.
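For readers who have not seen one, an FIS experiment template bundles targets, actions, a stop condition, and an IAM role. The boto3 sketch below is purely illustrative and not taken from the session: the role ARN and tag filter are placeholders, and it uses a deliberately simple stop-one-EC2-instance action rather than the serverless faults covered in the demo.

```python
# Illustrative sketch: create a simple AWS FIS experiment template with boto3.
# The role ARN and tag filter are placeholders; a real template also usually
# points its stop condition at a CloudWatch alarm instead of "none".
import uuid
import boto3

fis = boto3.client("fis")

response = fis.create_experiment_template(
    clientToken=str(uuid.uuid4()),
    description="Stop one tagged EC2 instance to test recovery",
    roleArn="arn:aws:iam::123456789012:role/fis-experiment-role",  # placeholder
    stopConditions=[{"source": "none"}],
    targets={
        "Instances": {
            "resourceType": "aws:ec2:instance",
            "resourceTags": {"chaos-ready": "true"},  # placeholder tag
            "selectionMode": "COUNT(1)",
        }
    },
    actions={
        "StopInstances": {
            "actionId": "aws:ec2:stop-instances",
            "targets": {"Instances": "Instances"},
        }
    },
)
print(response["experimentTemplate"]["id"])
```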
Starting out with a very traditional background in the data centres of the fabled M4 corridor, Simon eventually realised it was time to give up on the problems of manually babysitting servers, racks, and UPSes. He migrated to the cloud (and, in the process, to the Scottish Highlands) and now works to help clients transition their workloads, processes, and swag requirements along the same route.
Simon is a member of the AWS Community Builders program, and these days enjoys coaching and mentoring as much as getting hands-on with code, automation, and head-scratching.
Security isn’t just about prevention—it’s also about investigation. This talk dives into the forensic side of container security. We’ll explore how to analyze container images using Syft (SBOM generation), Grype (vulnerability scanning), and Trivy (multi-purpose scanner). Learn how to detect hidden risks, trace vulnerabilities back to their source, and build a repeatable forensic workflow that strengthens your DevSecOps pipeline. This isn’t just static analysis—it’s detective work for containers.
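As a rough illustration of what such a repeatable workflow can look like (a hedged sketch, not the speaker's pipeline), the script below chains Syft's SBOM output into Grype and cross-checks the image with Trivy, assuming all three CLIs are installed.

```python
# Sketch of a repeatable container-forensics pass: generate an SBOM with Syft,
# scan that SBOM with Grype, then cross-check the image with Trivy. Assumes
# the three CLIs are installed and on PATH; flags follow their documented usage.
import json
import subprocess
import sys

def run(cmd):
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

def forensic_pass(image):
    # 1. SBOM: inventory every package baked into the image.
    with open("sbom.json", "w") as f:
        f.write(run(["syft", image, "-o", "json"]))

    # 2. Vulnerabilities: scan the SBOM rather than re-pulling the image.
    grype_report = json.loads(run(["grype", "sbom:sbom.json", "-o", "json"]))

    # 3. Second opinion: Trivy scans the image directly.
    trivy_report = json.loads(run(["trivy", "image", "--format", "json", image]))

    return grype_report, trivy_report

if __name__ == "__main__":
    grype, trivy = forensic_pass(sys.argv[1] if len(sys.argv) > 1 else "alpine:3.19")
    print(len(grype.get("matches", [])), "Grype matches")
```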
Hello, I'm Mert Polat.
I'm currently working as a Cloud and Platform Engineer at Sufle. I started my career as a Jr. DevOps Engineer at Zip Turkey, where I gained extensive experience in the infrastructure and DevOps domains. At Duzce MEKATEK, I worked on the software team for an autonomous vehicle project and took on a leadership role. Additionally, I honed my skills in technologies like Docker, Kubernetes, and Ansible during a DevOps internship at Formica.
I am passionate about technology and knowledge sharing, so I write various articles for @DevopsTurkiye and @Bulut Bilişimciler publications on Medium.
I graduated from Duzce University with a degree in Computer Programming, and I'm currently pursuing a bachelor's degree in Management Information Systems at Anadolu University.
In the ever-evolving AI landscape, organizations are faced with a choice between open-source and closed-source models. While many developers find it easier to get started in the closed-source ecosystem, they quickly realize it is ultimately more expensive and inefficient, and lacks the security and controls required for their applications. Open-source models, meanwhile, have quickly caught up to their closed counterparts and now deliver a cheaper, just-as-accurate solution that gives organizations the security they need to run their operations.
In addition, the open source community is iterating at a remarkable pace, not only improving the accuracy of these models but also making them run even faster, especially on hardware beyond the GPU that is better suited for AI. With new open source models dropping every week and an active community fine-tuning even better versions daily, open source is making AI a commodity, with a wide range of cloud services for developers to choose from that make it even easier to use.
Join Amit Kushwaha, Director of AI Engineering, SambaNova, as he breaks down for attendees the advantages of open-source models, why they are critical for fast product iteration, and how organizations can use them to tap into the best minds globally, accelerate their pace of innovation, and position themselves to shape industry standards while continuously advancing enterprise-grade solutions.
Amit Kushwaha is the Director of AI Engineering at SambaNova Systems, leading the development and implementation of AI solutions that leverage SambaNova's differentiated hardware. Previously, as Principal Data Scientist at ExxonMobil, he led the organization’s digital transformation efforts, driving the strategy and execution of a multi-million dollar AI/ML portfolio that fostered innovation and enhanced operational efficiency at scale.
Passionate about harnessing technology to solve complex challenges, Amit specializes in developing innovative, business-focused solutions at the intersection of artificial intelligence, high-performance computing, and computer simulations. He holds a Ph.D. in Engineering from Stanford University.
Air-gapped systems are seen as the pinnacle of security, but are they truly untouchable? This talk explores real-world breaches—from Stuxnet to electromagnetic attacks—highlighting modern threats like supply chain risks and social engineering. Attendees will learn practical strategies to strengthen air-gapped environments through physical security, procedural controls, and advanced detection methods.
Sean Behan is a Senior Offensive Security Engineer at Oracle with a decade of experience across top-tier organizations including Google, AWS, the NSA, and the U.S. Navy. He specializes in offensive security, red teaming, and cloud security, with deep technical expertise in exploit development and adversary simulation. Sean holds OSCP and CISSP certifications and is passionate about continuous learning, regularly participating in CTFs and advanced cybersecurity research.
DevSecOps success isn’t just about picking the right tools—it’s about driving change across people, process, and technology. In this talk, we’ll go beyond the buzzwords to explore what it truly takes to embed security into modern software delivery.
We’ll start by demystifying the DevSecOps tooling landscape—covering the full acronym bingo (SAST, DAST, SCA, CSPM, etc.), highlighting major vendors, and sharing a practical framework for choosing the right tools for your environment. But that’s just the beginning.
We’ll explore SDLC maturity models and how to create a DevSecOps roadmap that aligns with your organization's goals. You’ll also learn how to approach organizational change with principles borrowed from change management—because tooling alone won’t get you there.
Finally, we’ll focus on the human side: the social skills you need to overcome resistance, influence stakeholders, and craft internal communications that get attention and drive adoption.
Whether you're just getting started or trying to take your DevSecOps efforts to the next level, this session equips you with the full advocate toolkit—technical insight, strategic perspective, and the soft skills to make it all stick.
Seb Coles is a seasoned security leader and engineer with a passion for making DevSecOps work in the real world—not just in theory. He’s held senior security roles at Clarks, ClearBank, LRQA, and Seccl, and worked as a senior consultant at Veracode, helping organizations of all sizes embed security into their software delivery.
At ClearBank, Seb built and led the security team that earned a Highly Commended recognition at the 2023 Computing Awards for Best DevSecOps Implementation. His approach goes beyond tools—focusing on strategy, organizational change, and the often-overlooked human side of DevSecOps.
Originally a software engineer, Seb has learned the hard way how crucial influence, communication, and cultural alignment are to making security stick. He’s stepped on plenty of rakes so others don’t have to—and now shares those lessons through talks, consulting, and national conference appearances.
Seb brings a candid, practical perspective to DevSecOps, grounded in hands-on experience and a deep understanding of both the tech and the people behind it.
In this hands-on session, I'll demonstrate how AI can revolutionize your Terraform testing workflow. Watch as I transform a complex terraform module with zero test coverage into a battle-hardened, production-ready component in under 30 minutes.
You'll learn how to:
- Leverage AI to quickly generate comprehensive test cases across unit, mock, and integration tests
- Build test suites that validate the full spectrum of your Terraform modules, from configuration to outputs
- Implement advanced testing patterns with mock providers to simulate AWS resources
- Create maintainable test coverage reports that highlight gaps and provide actionable insights
- Adopt a test-driven development approach that scales with your infrastructure
We'll showcase real examples, demonstrating how this approach has reduced our testing time by 90% while increasing code quality and developer confidence. Whether you're new to Terraform testing or looking to enhance your existing test suites, you'll walk away with actionable techniques to implement immediately.
Dharani Sowndharya is a Lead DevOps Engineer at Equal Experts, with nearly a decade of experience in the tech industry. For the past eight years, she has specialized in Cloud and DevOps, working on large-scale cloud migrations, site reliability engineering (SRE), and designing self-service data platforms using Kubernetes on AWS. Her strong foundation in cloud infrastructure and DevOps practices has made her a trusted expert in delivering scalable, reliable solutions.
Dharani is also passionate about mentoring and sharing knowledge. She frequently speaks at tech conferences and actively supports the growth of emerging engineers in the community.
Outside of work, she enjoys playing foosball and has an enviable collection of board games, which she loves to play with friends and family.
Many teams rely on staging environments to catch bugs before production — but what if that’s actually slowing you down and giving false confidence? In this talk, I’ll share how we eliminated our staging environment and built a pipeline that delivers faster, safer releases. We’ll cover the practical tooling we used, how feature flags and canary releases play a critical role, and the cultural mindset shift that made it work. You’ll walk away with concrete ideas for reducing deployment risk without relying on brittle pre-prod setups.
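To make the feature-flag and canary idea concrete, here is a tiny illustrative sketch (not the speaker's implementation) of a deterministic percentage-based rollout gate.

```python
# Illustrative only: a deterministic percentage-based rollout gate. A stable
# hash of (flag, user id) buckets each user, so the rollout percentage can be
# dialled up gradually in production instead of relying on a staging pass.
import hashlib

def in_rollout(user_id: str, flag: str, percentage: int) -> bool:
    """True if this user falls inside the rollout percentage for the flag."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < percentage  # stable bucket in [0, 100)

# Example: ship the new checkout flow to 5% of users first, then increase.
if in_rollout(user_id="user-42", flag="new-checkout", percentage=5):
    serve_new_checkout = True   # new code path
else:
    serve_new_checkout = False  # existing code path
```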
I got my first full-time developer job in 1999 and have been in software ever since. Over the years, I’ve worn many hats — developer, engineering manager, product manager — and I love building great products and teams. I’m currently CTO at Ivelum, where we create custom software for startups and enterprises, and I also work on Teamplify, a team management suite for engineering teams.
We’re in the midst of an AI revolution—and APIs are its unsung heroes. While LLMs and AI agents grab headlines, it's APIs that power their capabilities. Behind every AI-generated insight, recommendation, or automated task is an API call connecting the model to the tools, services, and data it needs to get the job done.
As AI systems evolve from passive assistants to autonomous agents capable of decision-making and execution, APIs have become the essential infrastructure enabling this transformation. They are no longer just integration tools—they are the action layer of AI.
In this talk, we’ll explore how APIs are shaping the future of intelligent automation. Using real-world examples from across industries, we’ll examine how companies are leveraging APIs to orchestrate multi-step workflows, access real-time data, and drive operational efficiency with AI. Organizations with robust, scalable, and discoverable API ecosystems will not only keep up—they’ll lead. If AI is the recipe, APIs are the ingredients. It's time we start treating them that way.
What You’ll Learn:
- The shift from human-first to machine-first consumption patterns in API design
- Emerging standards that are streamlining AI-API interactions
- Strategies to future-proof your API ecosystem for the intelligent systems of tomorrow
Talia Kohan is a Staff Developer Advocate at Postman, helping developers build, test, and scale APIs and AI-powered applications. A global keynote speaker, she works closely with the developer community to share best practices in modern software development and foster innovation through practical education and collaboration.
"If you ask 10 DBAs at a conference about putting data on Kubernetes, most will say that’s a bad idea" – Divine.
Divine is an advocate for data on Kubernetes, and a Data on Kubernetes Ambassador.
Shivadeep, on the other hand, as an Oracle DBA, traditionally believed that containers and Kubernetes weren't suitable for persistent workloads like databases.
And that’s rightly so, because Kubernetes was originally designed for stateless workloads. But he didn’t realize how much had changed.
With no prior experience in containers, Kubernetes, or NodeJS, Shivadeep embarked on a challenging project to build a scalable Pacman game. This journey not only helped him learn new technologies but also changed his perspective on using Kubernetes for database workloads.
Shivadeep discovered why and how Kubernetes is increasingly becoming a popular choice for running databases as well.
Shivadeep will start this talk by sharing his story of learning and exploration, highlighting:
To make the session even more engaging, Shivadeep will provide a hands-on opportunity for attendees to try out the Pacman game!
Towards the end of the talk, Divine will share how DBAs can evolve to manage databases on Kubernetes.
Currently the Founder of EverythingDevOps, Divine Odazie is a Developer Relations professional and technical writer with over five years of experience in technology and a track record in backend engineering, DevOps, cloud native, and developer relations on a global scale. He has given talks and workshops at developer conferences such as Open Source Summit Europe, KubeCon North America, Cloud Native Rejekts, and Ansible Contributor Summit. He also supports the cloud native community in Africa as a KCD (Kubernetes Community Days) organizer, along with other communities empowering Africa with technology.
Shivadeep Gundoju is a seasoned database administrator at ING Netherlands who loves to automate everything around databases and their infrastructure. He holds a Master's degree in Computing Systems & Infrastructure and is excited to explore Oracle and other RDBMSs on containers and Kubernetes.
With our traditional operations setup supporting monolithic systems, costs to support and maintain grew in proportion as the tech implementation scaled. This was primarily due to segregation in the organization setup for delivering business outcomes: while one part of the tech organization focused on build, the other focused on maintain. With differing goalposts and motivations around collective business impact, focus diverged, which led to undesirable customer experiences and unsustainable costs.
As the tech landscape became more distributed and decoupled, we saw an opportunity to introduce SRE adoption, building an integrated setup in which product teams and embedded SREs take full accountability from build to maintain, even as the implementation scaled across multiple markets. While this setup has proved impactful over the last couple of months, it has taken continuous effort to tie Service Level Indicators and Objectives (SLIs/SLOs) to business KPIs and thereby showcase direct business impact.
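For readers new to the terminology, the arithmetic linking an SLI to an SLO and its error budget is simple; the sketch below uses made-up numbers purely for illustration.

```python
# Illustrative arithmetic only (made-up numbers): compute an availability SLI
# from request counts and the error budget remaining against a target SLO.
def sli_and_error_budget(good_events: int, total_events: int, slo_target: float):
    sli = good_events / total_events                    # e.g. 0.9991 -> 99.91%
    allowed_bad = (1 - slo_target) * total_events       # error budget in events
    remaining_budget = allowed_bad - (total_events - good_events)
    return sli, remaining_budget

sli, remaining = sli_and_error_budget(good_events=999_100,
                                      total_events=1_000_000,
                                      slo_target=0.999)
print(f"SLI = {sli:.4%}, error budget remaining = {remaining:.0f} requests")
```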
This talk will delve into ways organizations can successfully adopt SRE, thereby building an integrated and impactful setup.
Mitul Jain
Digital Transformation Leader | SRE & DevOps Expert | Global Technology Executive
With over 23 years of global leadership experience, I am a recognized expert in Site Reliability Engineering (SRE), DevOps, and enterprise-scale automation. I have led transformative initiatives across industries, reshaping traditional IT operations into modern, business-aligned engineering practices.
I have built and scaled global SRE organizations, integrated reliability engineering into digital value streams, and enabled Build-to-Operate models that drive measurable business outcomes, including zero P1 incidents during peak periods and significant cost optimizations. As a passionate advocate for innovation and community building, I founded an SRE Community of Practice within the organization and have led strategic alliances with platform providers to enhance service capabilities and go-to-market strategies.
I often speak on topics including digital transformation, reliability engineering, DevOps at scale, and the future of IT operations. My insights are shaped by hands-on experience across Consumer Goods, BFSI, Telecom, Manufacturing, and Travel & Hospitality.
In our world where everything is code, reliability extends beyond clean and reliable code running on the right infrastructure. It requires a robust sociotechnical system, the dynamic interplay between social and technical components. Our North Star is an engineering culture, built on shared beliefs, practices and behaviours that shape how we operate, solve problems, collaborate, innovate and continuously learn.
But how do we achieve this engineering culture, where innovation and success are driven by collaboration, trust, autonomy, and passion? What is our code of conduct and how does it propel us forward?
In this talk, we will take you through our five-year journey in Reliability Advocacy. We started out as a group of 10 enthusiastic engineers from different platform and enablement teams who wanted to share knowledge of our reliability product offering and practices across the organization. We are now a household name: we run our annual Reliability Event conference, attended by 300+ engineers every year; our reliability trainings are part of the curriculum for all new joiners; and we've guided hundreds of engineers on their path to defining SLIs and SLOs. In part thanks to our reliability advocates, reliability is now one of our engineering pillars for years to come, shaping our organization and raising our general availability rating from 99.54% to 99.87% in the process.
We will share what worked for us, what didn’t, and how we gradually embedded ourselves into our large, regulated, and risk-averse organization of over 15,000 engineers. We will reveal the code that helped us shape our SRE practices and make this transformation possible. In doing so, we will share a 5-step plan to start a reliability advocacy function within your organization.
Stephan Mousset is the Global Product Manager for the Performance & Resilience Engineering Platform at ING, where he also serves as Lead Reliability Advocate. With nearly two decades of experience at ING, Stephan has led major efforts in performance testing automation, SLI/SLO adoption, and reliability engineering at scale.
He is an active voice in the DevOps and SRE communities — co-organizing DevOpsDays Amsterdam, Site Reliability Engineering NL, and ING’s internal Reliability Event.
Stephan has shared his insights at multiple conferences, including:
- SLOconf 2022: ING’s global SLO rollout and lessons learned
- SRE NL Meetup at ING (2023): Building a platform for SLO-driven performance engineering
- DevOpsDays Berlin 2024 (Ignite): "Unleashing DevOps Magic in Performance Testing & Analysis Automation"
His talks focus on practical, platform-enabled approaches to making reliability and resilience part of everyday engineering — through automation, observability, and culture.
How do you build reliable systems where downtime is not just an inconvenience, but a threat to trust, income, and even safety? In this talk, Roosevelt Elias, founder of Payble, explores how to establish SRE principles in markets where infrastructure is unreliable, cloud access is intermittent, and talent pipelines are still emerging. Drawing from real-world experience building financial infrastructure in Africa, we’ll discuss culturally aware incident management, lightweight observability stacks, distributed troubleshooting with limited tooling, and what it means to bake resilience into the DNA of both product and process even when the odds are stacked against you.
This talk is ideal for SREs, platform engineers, and product leaders building for emerging markets or aiming to design more resilient systems globally.
Roosevelt Elias is a visionary solutions architect, product strategist, technology entrepreneur, and founder of Payble, a next-generation product technology company focused on solving complex economic and digital inclusion challenges for micro and small businesses across Africa and globally. With over a decade of experience spanning product design, payments, and creative technology, Roosevelt has built companies at the intersection of design, technology, and social impact, including a successful exit in the print media space and a thriving live-streaming SaaS business that has powered events for global artists and thought leaders.
A trained computer scientist with a background in IT Security, Roosevelt is passionately committed to creating tools that enable underserved businesses to thrive with the same capabilities as Fortune 500 enterprises, through intelligent systems, AI-driven insights, and user-first design. Under his leadership, Payble is redefining the role of financial technology, not just as a utility, but as an ecosystem that transforms local businesses into global players.
He is a bold thinker, deeply rooted in service, faith, and sustainable impact, building Payble to be a company that will outlive generations.
Why Terragrunt Matters
If you've worked with Terraform in production, you've likely encountered the pain of managing multiple environments, duplicate configuration files, and complex remote state setups. Terragrunt solves these common Infrastructure as Code challenges by providing a thin wrapper around Terraform that promotes DRY (Don't Repeat Yourself) principles and simplifies configuration management.
What You'll Learn
This talk will take you from Terragrunt zero to hero, covering:
Foundation & Setup
- What Terragrunt is and why it exists
- Key differences between Terraform and Terragrunt
- Installation and initial configuration
Pre-requisites:
- Basic AWS knowledge
- Basic Terraform knowledge
Ceyda serves as a Cloud and Platform Engineer at Sufle. She has previously worked as a Software Developer at different companies. She has experience particularly in technologies such as AWS, Kubernetes, and Python. She completed her Master's degree in Software Engineering at Boğaziçi University and graduated from Sabancı University with a Bachelor's degree in Computer Science and Engineering. She holds a HashiCorp Certified Terraform Associate certification and has been using Terraform and Terragrunt for big production workloads.
Sustainability is increasingly becoming a priority in the Information Technology sector, which has fueled the demand for energy-efficient solutions in all computing environments. Effective management of resources in Kubernetes environments requires proper monitoring and optimisation of power usage. Kepler (Kubernetes-based Efficient Power Level Exporter) solves this problem by offering a solid solution for energy monitoring at the pod level. Using software counters, tailored machine learning models, and the Cloud Native benchmark suite, Kepler provides accurate energy consumption estimations and detailed reports on power usage. Developers and system administrators can thus make informed choices towards environmentally friendly and energy-efficient Kubernetes operations.
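As a hedged illustration of what consuming Kepler's data can look like, the sketch below queries pod-level power draw through the Prometheus HTTP API; the Prometheus URL is a placeholder, and the metric and label names should be checked against the deployed Kepler version.

```python
# Hedged sketch: read pod-level power draw from Kepler via the Prometheus HTTP
# API. The Prometheus URL is a placeholder, and the metric and label names
# (kepler_container_joules_total, pod_name) should be verified against the
# metrics actually exposed by the Kepler version deployed in your cluster.
import json
import urllib.parse
import urllib.request

PROM_URL = "http://localhost:9090"  # placeholder Prometheus endpoint

def energy_by_pod(window="5m"):
    # rate() over a joules counter gives joules/second, i.e. watts, per pod.
    query = f"sum by (pod_name) (rate(kepler_container_joules_total[{window}]))"
    url = f"{PROM_URL}/api/v1/query?" + urllib.parse.urlencode({"query": query})
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    return {r["metric"].get("pod_name", "unknown"): float(r["value"][1])
            for r in data["data"]["result"]}

if __name__ == "__main__":
    for pod, watts in sorted(energy_by_pod().items(), key=lambda kv: -kv[1]):
        print(f"{pod}: {watts:.2f} W")
```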
Mayank Goyal is a Senior Site Reliability Engineer at Okta, specialising in maintaining the scalability and reliability of Workflows systems. Before working at Okta, he honed his DevOps and software engineering skills at Zoom and Red Hat, where he grew deeply passionate about automation. While at Red Hat, he was instrumental in automating and configuring the release pipeline in RHELWF. Apart from his professional obligations, Mayank is a passionate supporter of open-source technologies and likes being on the cutting edge. He loves connecting with the technology community, exchanging thoughts, and learning from others.
Platform engineers work hard to build great tooling and automations for developers, but often struggle to get feature teams to adopt the platform to its full potential. Meanwhile, SREs are buried in incident firefighting and can’t keep up with onboarding new services or proactive reliability initiatives.
Turns out these two challenges could be solved by tackling them together. In this talk, we’ll share how we combined platform engineering and SRE into one hybrid responsibility that doesn’t just ship tooling, it helps teams actually adopt it.
We’ll show how our Platform SREs make new services “reliable by default” with out-of-the-box observability, alerts, and SLOs.
But for those older, messier services? We send someone in. Embedded SRE style, but for a limited time and scope.
We’ll walk through how we structure these short-term embed missions, what’s worked (and what’s flopped), and how this helped adoption go way up without burning anyone out. If you’re tired of begging teams to migrate or your SREs are on the verge, this one’s for you.
Jorge is a Reliability Advocate at Rootly and the author of the Linux Foundation Introduction to Backstage (LFS142) course. He has a background in software engineering (ex-PayPal) and digital communication (UCLA). He's also a certified sommelier (CETT Barcelona).