Crossrail Place,
Canary Wharf,
London E14 5AR, UK
Level -2
Tube access
Jubilee, Elizabeth and DLR lines: Canary Wharf station
Let's take a look at some of the basic NGINX metrics to monitor and what they indicate. We start with the application layer and move down through process, server, hosting provider, external services, and user activity. With these metrics, you get coverage for both active and incipient problems with NGINX.
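As a minimal sketch of the application-layer starting point, the snippet below parses the plain-text output of NGINX's stub_status module (assuming it is enabled at an endpoint such as `/stub_status`); the field names follow the module's documented output format.

```python
import re

def parse_stub_status(text: str) -> dict:
    """Extract basic connection and request counters from stub_status output."""
    metrics = {}
    metrics["active_connections"] = int(
        re.search(r"Active connections:\s+(\d+)", text).group(1))
    # The three-number line holds cumulative accepts, handled, and requests.
    accepts, handled, requests = re.search(
        r"\n\s*(\d+)\s+(\d+)\s+(\d+)", text).groups()
    metrics["accepts"], metrics["handled"], metrics["requests"] = (
        int(accepts), int(handled), int(requests))
    # accepts - handled > 0 means connections were dropped (resource limits).
    metrics["dropped"] = metrics["accepts"] - metrics["handled"]
    reading, writing, waiting = re.search(
        r"Reading:\s+(\d+)\s+Writing:\s+(\d+)\s+Waiting:\s+(\d+)", text).groups()
    metrics["reading"], metrics["writing"], metrics["waiting"] = (
        int(reading), int(writing), int(waiting))
    return metrics

# Sample stub_status output, as served by NGINX:
sample = """Active connections: 291
server accepts handled requests
 16630948 16630948 31070465
Reading: 6 Writing: 179 Waiting: 106
"""
print(parse_stub_status(sample))
```

A monitoring agent would fetch the endpoint on an interval and derive rates (requests per second, dropped connections per second) from these cumulative counters.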
Currently providing technical evangelism for NGINX, Dave works with DevOps teams, developers, and architects to understand the advantages of modern microservice architectures and orchestration for solving large-scale distributed systems challenges, especially with open source and the innovation it drives. Dave has been a champion of open systems and open source from the early days of Linux, through open distributed file systems like XFS, GFS, and GlusterFS, to today's world of clouds and containers. He often speaks on the real-world issues associated with emerging software architectures and practices, on open source software, and on creating new technology companies.
Dave has spoken on technical topics such as distributed request tracing, modern monitoring practices, open source projects from both corporate and foundation perspectives, and how open source innovation powers today's world.
Dave was named one of the top ten pioneers in open source by Computer Business Review, having cut his teeth on Linux and compilers before the phrase "open source" was coined. Well versed in trivia, he won a Golden Penguin in 2002. When he's not talking, you can find him hiking with his trusty camera, trying to keep up with his wife.
The shift from traditional testing to chaos engineering marks a revolution in building reliable systems.
This session unpacks the concept's history and its role in ensuring system resilience. We'll survey approaches to chaos engineering before turning to Chaos Engineering as a Service with Amazon's Fault Injection Service.
You'll leave with insights into crafting and running Fault Injection Service experiment templates, plus a live demo showing how we can now test serverless code in AWS with chaos engineering.
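To give a feel for what an experiment template contains, here is a hedged sketch of one as a Python dict, shaped like the input to a create-experiment-template call. The action identifier, parameter names, and ARNs are illustrative assumptions; check the FIS documentation for the exact values.

```python
# Illustrative FIS experiment template: inject latency into Lambda invocations,
# with a CloudWatch alarm as a stop condition. All names/ARNs are hypothetical.
experiment_template = {
    "description": "Add latency to Lambda invocations during a canary window",
    "roleArn": "arn:aws:iam::123456789012:role/fis-experiment-role",  # hypothetical
    "targets": {
        "checkout-functions": {
            "resourceType": "aws:lambda:function",
            "resourceTags": {"service": "checkout"},
            "selectionMode": "ALL",
        }
    },
    "actions": {
        "inject-latency": {
            # Action ID and parameter names are assumptions -- verify in the docs.
            "actionId": "aws:lambda:invocation-add-delay",
            "parameters": {"duration": "PT2M", "startupDelayMilliseconds": "500"},
            "targets": {"Functions": "checkout-functions"},
        }
    },
    "stopConditions": [
        {"source": "aws:cloudwatch:alarm",
         "value": "arn:aws:cloudwatch:us-east-1:123456789012:alarm:checkout-errors"}
    ],
}
print(sorted(experiment_template))
```

The stop condition is the safety net: if the alarm fires, FIS halts the experiment automatically, which is what makes running these in real environments tolerable.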
Starting out with a very traditional background in the data centres of the fabled M4 corridor, Simon eventually realised it was time to give up on manually babysitting servers, racks, and UPSes. He migrated to the cloud (and, in the process, to the Scottish Highlands) and now helps clients transition their workloads, processes, and swag requirements along the same route.
Simon is a member of the AWS Community Builders program and these days enjoys coaching and mentoring as much as getting hands-on with code, automation, and head-scratching.
Security isn’t just about prevention—it’s also about investigation. This talk dives into the forensic side of container security. We’ll explore how to analyze container images using Syft (SBOM generation), Grype (vulnerability scanning), and Trivy (multi-purpose scanner). Learn how to detect hidden risks, trace vulnerabilities back to their source, and build a repeatable forensic workflow that strengthens your DevSecOps pipeline. This isn’t just static analysis—it’s detective work for containers.
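One step in such a forensic workflow can be sketched as code: summarising a Grype JSON report (produced with `grype <image> -o json`) to count findings by severity and pull out the offending artifacts. The field names below follow Grype's JSON layout as of recent versions; verify them against your Grype version before depending on them.

```python
from collections import Counter

def summarise_grype(report: dict, fail_on: str = "Critical") -> tuple[Counter, list]:
    """Count vulnerabilities by severity and list artifacts at the fail_on level."""
    by_severity = Counter()
    offenders = []
    for match in report.get("matches", []):
        sev = match["vulnerability"]["severity"]
        by_severity[sev] += 1
        if sev == fail_on:
            art = match["artifact"]
            offenders.append((match["vulnerability"]["id"], art["name"], art["version"]))
    return by_severity, offenders

# A tiny inline stand-in for a real report (real ones come from `grype -o json`):
sample = {"matches": [
    {"vulnerability": {"id": "CVE-2023-0001", "severity": "Critical"},
     "artifact": {"name": "openssl", "version": "1.1.1"}},
    {"vulnerability": {"id": "CVE-2023-0002", "severity": "Medium"},
     "artifact": {"name": "busybox", "version": "1.35"}},
]}
counts, critical = summarise_grype(sample)
print(counts, critical)
```

In a pipeline you would feed this from Grype's output and fail the build (or open an investigation) when `critical` is non-empty, tracing each finding back to the layer and package that introduced it.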
Hello, I'm Mert Polat.
I'm currently working as a Cloud and Platform Engineer at Sufle. I started my career as a Jr. DevOps Engineer at Zip Turkey, where I gained extensive experience in the infrastructure and DevOps domains. At Duzce MEKATEK, I worked on the software team for an autonomous vehicle project and took on a leadership role. Additionally, I honed my skills in technologies like Docker, Kubernetes, and Ansible during a DevOps internship at Formica.
I am passionate about technology and knowledge sharing, so I write articles for the @DevopsTurkiye and @Bulut Bilişimciler publications on Medium.
I graduated from Duzce University with a degree in Computer Programming, and I'm currently pursuing a bachelor's degree in Management Information Systems at Anadolu University.
In the ever-evolving AI landscape, organizations face a choice between open-source and closed-source models. While many developers find it easier to get started in the closed-source ecosystem, they quickly realize it is ultimately more expensive and inefficient, and lacks the security and controls required for their applications. Meanwhile, open-source models have quickly caught up to their closed counterparts and now deliver a cheaper, equally accurate solution that gives organizations the security they need to run their operations.
In addition, the open-source community is iterating at a pace that is beyond belief, not only improving the accuracy of these models but making them run even faster, especially on hardware better suited for AI than the GPU. With new open-source models dropping every week and an active community fine-tuning even better versions daily, open source is making AI a commodity, with a wide range of cloud services for developers to choose from that make it even easier to use.
Join Amit Kushwaha, Director of AI Engineering at SambaNova, as he breaks down the advantages of open-source models, why they are critical for fast product iteration, and how organizations can use them to tap into the best minds globally, accelerate their pace of innovation, and position themselves to shape industry standards while continuously advancing enterprise-grade solutions.
Amit Kushwaha is the Director of AI Engineering at SambaNova Systems, leading the development and implementation of AI solutions that leverage SambaNova's differentiated hardware. Previously, as Principal Data Scientist at ExxonMobil, he led the organization’s digital transformation efforts, driving the strategy and execution of a multi-million dollar AI/ML portfolio that fostered innovation and enhanced operational efficiency at scale.
Passionate about harnessing technology to solve complex challenges, Amit specializes in developing innovative, business-focused solutions at the intersection of artificial intelligence, high-performance computing, and computer simulations. He holds a Ph.D. in Engineering from Stanford University.
Air-gapped systems are seen as the pinnacle of security, but are they truly untouchable? This talk explores real-world breaches—from Stuxnet to electromagnetic attacks—highlighting modern threats like supply chain risks and social engineering. Attendees will learn practical strategies to strengthen air-gapped environments through physical security, procedural controls, and advanced detection methods.
Sean Behan is a Senior Offensive Security Engineer at Oracle with a decade of experience across top-tier organizations including Google, AWS, the NSA, and the U.S. Navy. He specializes in offensive security, red teaming, and cloud security, with deep technical expertise in exploit development and adversary simulation. Sean holds OSCP and CISSP certifications and is passionate about continuous learning, regularly participating in CTFs and advanced cybersecurity research.
DevSecOps success isn’t just about picking the right tools—it’s about driving change across people, process, and technology. In this talk, we’ll go beyond the buzzwords to explore what it truly takes to embed security into modern software delivery.
We’ll start by demystifying the DevSecOps tooling landscape—covering the full acronym bingo (SAST, DAST, SCA, CSPM, etc.), highlighting major vendors, and sharing a practical framework for choosing the right tools for your environment. But that’s just the beginning.
We’ll explore SDLC maturity models and how to create a DevSecOps roadmap that aligns with your organization's goals. You’ll also learn how to approach organizational change with principles borrowed from change management—because tooling alone won’t get you there.
Finally, we’ll focus on the human side: the social skills you need to overcome resistance, influence stakeholders, and craft internal communications that get attention and drive adoption.
Whether you're just getting started or trying to take your DevSecOps efforts to the next level, this session equips you with the full advocate toolkit—technical insight, strategic perspective, and the soft skills to make it all stick.
Seb Coles is a seasoned security leader and engineer with a passion for making DevSecOps work in the real world—not just in theory. He’s held senior security roles at Clarks, ClearBank, LRQA, and Seccl, and worked as a senior consultant at Veracode, helping organizations of all sizes embed security into their software delivery.
At ClearBank, Seb built and led the security team that earned a Highly Commended recognition at the 2023 Computing Awards for Best DevSecOps Implementation. His approach goes beyond tools—focusing on strategy, organizational change, and the often-overlooked human side of DevSecOps.
Originally a software engineer, Seb has learned the hard way how crucial influence, communication, and cultural alignment are to making security stick. He’s stepped on plenty of rakes so others don’t have to—and now shares those lessons through talks, consulting, and national conference appearances.
Seb brings a candid, practical perspective to DevSecOps, grounded in hands-on experience and a deep understanding of both the tech and the people behind it.
In this hands-on session, I'll demonstrate how AI can revolutionize your Terraform testing workflow. Watch as I transform a complex Terraform module with zero test coverage into a battle-hardened, production-ready component in under 30 minutes.
You'll learn how to:
- Leverage AI to quickly generate comprehensive test cases across unit, mock, and integration tests
- Build test suites that validate the full spectrum of your Terraform modules, from configuration to outputs
- Implement advanced testing patterns with mock providers to simulate AWS resources
- Create maintainable test coverage reports that highlight gaps and provide actionable insights
- Adopt a test-driven development approach that scales with your infrastructure

We'll showcase real examples, demonstrating how this approach has reduced our testing time by 90% while increasing code quality and developer confidence. Whether you're new to Terraform testing or looking to enhance your existing test suites, you'll walk away with actionable techniques to implement immediately.
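To illustrate the mock-provider pattern the session relies on, here is a hedged sketch of a Terraform test file using the native test framework (Terraform 1.7+). The module layout, variable, and resource names are hypothetical, and the mocked attribute values are illustrative only.

```hcl
# Hypothetical tests/s3_bucket.tftest.hcl for a module that creates an S3 bucket.
# mock_provider lets the plan run without AWS credentials or real resources.
mock_provider "aws" {
  mock_resource "aws_s3_bucket" {
    defaults = {
      arn = "arn:aws:s3:::mock-bucket" # illustrative mocked attribute
    }
  }
}

run "bucket_name_is_prefixed" {
  command = plan

  variables {
    name = "logs"
  }

  assert {
    condition     = startswith(aws_s3_bucket.this.bucket, "acme-")
    error_message = "Bucket names must carry the acme- prefix."
  }
}
```

Because the provider is mocked, a suite like this runs in seconds in CI, which is what makes AI-generated test cases cheap to iterate on.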
Dharani Sowndharya is a Lead DevOps Engineer at Equal Experts, with nearly a decade of experience in the tech industry. For the past eight years, she has specialized in Cloud and DevOps, working on large-scale cloud migrations, site reliability engineering (SRE), and designing self-service data platforms using Kubernetes on AWS. Her strong foundation in cloud infrastructure and DevOps practices has made her a trusted expert in delivering scalable, reliable solutions.
Dharani is also passionate about mentoring and sharing knowledge. She frequently speaks at tech conferences and actively supports the growth of emerging engineers in the community.
Outside of work, she enjoys playing foosball and has an enviable collection of board games, which she loves to play with friends and family.
Many teams rely on staging environments to catch bugs before production — but what if that’s actually slowing you down and giving false confidence? In this talk, I’ll share how we eliminated our staging environment and built a pipeline that delivers faster, safer releases. We’ll cover the practical tooling we used, how feature flags and canary releases play a critical role, and the cultural mindset shift that made it work. You’ll walk away with concrete ideas for reducing deployment risk without relying on brittle pre-prod setups.
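The canary mechanics behind this approach can be sketched in a few lines: a feature flag that deterministically buckets each user, so the same user always sees the same variant while the rollout percentage ramps up. This is a generic illustration of the pattern, not the speaker's actual tooling.

```python
import hashlib

def in_canary(user_id: str, feature: str, rollout_percent: int) -> bool:
    """Deterministic percentage rollout: hash user+feature into a 0-99 bucket."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 100  # stable bucket per (user, feature)
    return bucket < rollout_percent

users = ("alice", "bob", "carol")
# Ramping from 5% to 100% only moves the threshold; a user's bucket never changes,
# so nobody flips back and forth between variants as the rollout widens.
enabled_at_5 = {u for u in users if in_canary(u, "new-checkout", 5)}
enabled_at_100 = {u for u in users if in_canary(u, "new-checkout", 100)}
print(enabled_at_5, enabled_at_100)
```

Hashing on `(feature, user)` rather than the user alone means different features roll out to different slices of users, which limits the blast radius of any single release.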
I got my first full-time developer job in 1999 and have been in software ever since. Over the years, I’ve worn many hats — developer, engineering manager, product manager — and I love building great products and teams. I’m currently CTO at Ivelum, where we create custom software for startups and enterprises, and I also work on Teamplify, a team management suite for engineering teams.
We’re in the midst of an AI revolution—and APIs are its unsung heroes. While LLMs and AI agents grab headlines, it's APIs that power their ability to act. Behind every AI-generated insight, recommendation, or automated task is an API call connecting the model to the tools, services, and data it needs to get the job done.
As AI systems evolve from passive assistants to autonomous agents capable of decision-making and execution, APIs have become the essential infrastructure enabling this transformation. They are no longer just integration tools—they are the action layer of AI.
In this talk, we’ll explore how APIs are shaping the future of intelligent automation. Using real-world examples from across industries, we’ll examine how companies are leveraging APIs to orchestrate multi-step workflows, access real-time data, and drive operational efficiency with AI. Organizations with robust, scalable, and discoverable API ecosystems will not only keep up—they’ll lead. If AI is the recipe, APIs are the ingredients. It's time we start treating them that way.
What You’ll Learn:
- The shift from human-first to machine-first consumption patterns in API design
- Emerging standards that are streamlining AI-API interactions
- Strategies to future-proof your API ecosystem for the intelligent systems of tomorrow
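A minimal sketch of the machine-first pattern described above: an API endpoint exposed to an agent as a tool schema (JSON-Schema parameters, in the style most LLM function-calling interfaces accept), plus the action layer that maps a tool call onto a real API call. Every name here is illustrative, and the API response is stubbed.

```python
# A tool description an agent can discover and invoke (names are hypothetical).
get_order_status_tool = {
    "name": "get_order_status",
    "description": "Look up the fulfilment status of a customer order.",
    "parameters": {
        "type": "object",
        "properties": {
            "order_id": {"type": "string", "description": "Order identifier, e.g. ORD-1234"},
            "include_history": {"type": "boolean", "description": "Return state transitions too"},
        },
        "required": ["order_id"],
    },
}

def dispatch(tool_call: dict) -> dict:
    """The 'action layer': route an agent's tool call to the backing API."""
    if tool_call["name"] == "get_order_status":
        order_id = tool_call["arguments"]["order_id"]
        # In production this would be an HTTP call; here it's a stubbed response.
        return {"order_id": order_id, "status": "shipped"}
    raise ValueError(f"unknown tool: {tool_call['name']}")

print(dispatch({"name": "get_order_status", "arguments": {"order_id": "ORD-1234"}}))
```

The schema is the machine-readable contract: the better described and more discoverable it is, the less glue code sits between the model and the API it needs to call.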
Talia Kohan is a Staff Developer Advocate at Postman, helping developers build, test, and scale APIs and AI-powered applications. A global keynote speaker, she works closely with the developer community to share best practices in modern software development and foster innovation through practical education and collaboration.