Oracle OpenWorld 2017: Day 1 Wrap Up

I wasn’t sure what to expect from my first OpenWorld conference. If the rest is anything like day 1, it will be a great week. There was plenty of marketing (to be expected at a vendor-sponsored conference), but a lot of useful content as well. The highlights for me were the sessions on Kafka, Docker, and Data Guard; all contained information I can take back to the team and put to use immediately. I was impressed with the session on CloudDBA (and did not miss the parallel to Oracle’s announcement of their autonomous database), but would love to have seen more detail on the architecture. What do they use for monitoring? What platform are they using for log streaming? And so on.

I hope that the rest of the week provides similar information and ideas that I can take back.

OpenWorld 2017: Mobile- and Bot-Enable Your Oracle Enterprise Apps

I have been a big proponent of chat bots ever since attending Percona Live 2015, where I first heard about their use at other companies. It has not caught on where I work, but I thought I would attend a session to see what options existed from Oracle. They presented a pretty interesting set of platforms that can help you quickly stand up mobile and chat-based systems. My notes are:

Oracle Mobile Cloud Enterprise

  • Mobile Core (MCS)
    • Built-in Mobile Services
    • Mobile API Catalog
    • MAX with Mobile Services
  • Intelligent Bots (IB)
    • AI/NLP Intent Detection
    • Multi-Channel Integration
    • Rich Conversational Flow
    • Entity Detection & Slot Filling
  • Customer Experience Analytics (CxA)
  • Challenge is in integrating back-end systems
    • OMCe – Mobile Services
      • Open source mobile client development tools (JET)
      • Tools for service devs to shape Mobile optimized APIs
  • New pieces are Bots and Customer Experience
  • AI first will replace “cloud first, mobile first” in the next 10 years
  • App sprawl getting out of control
  • Integrates with a number of chat clients, voice, Alexa, etc.
  • Value of chatbot is being able to integrate into a number of back-end systems
  • CxA – API to instrument your app

Case Study: Brocade

  • Looking to integrate Oracle EBS, Salesforce, Taleo, Microsoft Exchange, etc.
  • Used Mobile Iron to push app
  • Have mobile now and working on bot
  • Works with 11i and R12
  • Worked with Rapid Value for this


  • 50–60% of developers’ time is consumed in backend development
  • App fatigue
    • Too many apps – consumer and enterprise
    • Complexity in managing and upgrading enterprise apps
    • No common place to see all notifications for enterprise apps
  • Started off with a pre-built mobile app offered by Rapid Value
  • MCS brought the following to the table
    • Enterprise Security
    • MDM Integration
    • Analytics and Reporting
    • Instance Life Cycle Management
    • Mobile API Catalog
    • Offline Sync, Push Notifications
  • No infrastructure at Brocade
  • Strategic Advantages aimed at using Chatbots
    • Conversational mode Convenience
    • Reduce App store approval challenges
    • No App hosting and distribution
    • Zero User Training
  • Chatted with Oracle bot using Facebook Messenger
  • Ability to test the bot as well
  • YAML used for dialogue flow builder
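
To give a feel for that last point, here is a hypothetical sketch of what a YAML-driven dialogue flow might look like. The state names, component names, and properties below are illustrative assumptions, not the exact Oracle Intelligent Bots schema:

```yaml
# Hypothetical dialogue-flow sketch; state names, components, and
# properties are illustrative assumptions, not the exact Oracle schema.
states:
  detectIntent:
    component: detectUserIntent        # AI/NLP intent detection
    transitions:
      actions:
        orderStatus: askOrderNumber
        unresolvedIntent: handleUnknown
  askOrderNumber:
    component: promptForText           # entity detection & slot filling
    properties:
      prompt: "What is your order number?"
      variable: orderNumber
    transitions:
      next: lookupOrder
  lookupOrder:
    component: callBackendService      # integration into a back-end system
    properties:
      service: orderStatusAPI
    transitions:
      next: detectIntent
  handleUnknown:
    component: promptForText
    properties:
      prompt: "Sorry, I didn't understand. Can you rephrase?"
    transitions:
      next: detectIntent
```

The notes above (intent detection, slot filling, back-end integration) map to the top-level concepts such a flow has to express.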

OpenWorld 2017: Modernize your IT-Landscape with API Driven Development

Sven Bernhardt and Danilo Schmiedel gave a session that mostly turned into a sales pitch for Apiary, a product Oracle purchased. It was a pretty cool tool, but not something that closes any gaps where I work. My notes are:

  • API Management
    • more than 12,000 public APIs offered by companies (2015)
    • Salesforce makes 50% revenue via API
      • Expedia 90%
      • ebay 60%
    • APIs are the door to the digital tomorrow
      • used to discover new business models and to evolve digital economies
    • Can support key business goals
      • Revenue Growth
      • Customer Satisfaction & Engagement
      • Operational Efficiencies
      • Partner Contribution & Ecosystem
    • What does it provide
      • Approved APIs for app developers
      • Provides design guidance for backend API developers
  • Taming the Monolith – Challenges + Demo
    • see pic for sample of Monolith Application
    • Task: Improve customer satisfaction with new, innovative apps and modernize what we have today to increase our flexibility
    • see pic for decoupling and API breakdown
  • Architectural Considerations


OpenWorld 2017: Docker 101 for Oracle DBAs

My team is looking at changing the way we manage Oracle.  First, we are looking at migrating off of Exadata and onto a different architecture.  But more fundamentally, we are questioning how we deliver database services and how we can drive innovation within the company by changing our processes and/or technologies.

Docker has come up as one such technology. We have a lot of developers moving toward microservice architectures, and they need a good way to quickly spin up databases for development. We also need to better standardize our processes across data platforms and improve our ability to scale environments based on a number of factors. Adeesh Fulay did a great job of showing the state of containers as it pertains to the Oracle database. The short of it is that Oracle support on LXC is very mature, while Docker is quickly closing the gap. I am still very interested in seeing how we can leverage Docker containers and orchestration to improve our efficiency with Oracle as well as our other platforms. My notes are:

Why Containers?

  • Comparison to Shipping Goods
  • Intermodal Shipping Container
  • Similar issues in IT
    • App with list of steps to deploy
    • Prone to mistakes
    • Need to solve dependency challenges
    • Also need to make good on promise of virtualization to provide mobility
  • Docker
    • Developer: Build once, Run anywhere
    • Operator: Configure once, Run Anything

Linux Container Overview

  • VM contains app and: Guest OS, libraries, etc.
  • Containers: No OS or Hypervisor
    • Own network interface
    • file system
    • Isolation (security) – linux kernel’s namespaces
    • Isolation(resource usage) – linux kernel’s cgroups
  • Why
    • OS-Based Lightweight Virtualization technology
    • Content & Resource Isolation
    • Better Performance
    • Better Efficiency and smaller footprint
    • Build, Ship, Deploy anywhere
    • Separation of Duties
  • Types
    • Application Containers
      • Docker
        • Requires application to be repackaged and reconfigured to work with Docker image format
      • One container, one app (primary app)
      • patch by replacing containers
      • good for modern applications
    • System Containers
      • Each container runs an entire service stack
      • LXC, OpenVZ, Solaris Zones
        • No need to reconfigure applications
      • Patch in place
      • good for legacy apps
    • Oracle and Docker
    • No RAC on Docker
    • LXC Support

Look Under the Hood

  • LXC is more mature with Oracle
  • Docker Host
    • daemon/engine – manages networking, images, containers, (data) volumes, plugins, orchestration, etc.
    • images
    • Dockerfile
      • Instructions for turning base image into a new docker image
    • containers
      • Like a directory. It is created from an image and contains everything required to run an app. Containers, like VMs, have state and are portable.
      • Containerd: simple daemon that uses runtimes to manage containers
    • Volumes
      • Supports local or shared/distributed
      • Choose between mobility & performance
      • Docker outsourced issue of storage to other vendors
    • Networking
      • Types:
        • None: no external network
        • Host: shares host network stack
        • Bridge: NATing, exposing ports, poor performance
        • macvlan: uses host subnet, VLAN, swarm mode
        • Overlay: multi-host VXLAN, swarm mode
        • 3rd Party: OpenvSwitch, Weave, etc..
    • Orchestration
      • kubernetes
      • Mesos
      • Docker Swarm
      • Rancher
      • Nomad
      • ROBIN
  • Docker client
    • docker build
    • docker pull
    • docker run
  • Registry – Public or Private contains pre-build images

Getting Hands Dirty
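
A minimal sketch helps tie the pieces above together: a Dockerfile turns a base image into a new image, and a container is created from that image. The base image tag and file names below are assumptions for illustration only:

```dockerfile
# Minimal sketch: instructions for turning a base image into a new image.
# Base image tag and file names are illustrative assumptions.
FROM oraclelinux:7

# Bake the app into the image (one container, one primary app)
COPY app.sh /usr/local/bin/app.sh
RUN chmod +x /usr/local/bin/app.sh

# Command the container runs when started
CMD ["/usr/local/bin/app.sh"]
```

With the client commands from the notes: `docker build -t myapp .` builds the image, `docker pull oraclelinux:7` fetches the base image from a registry, and `docker run -d --name myapp1 myapp` creates and starts a container from it.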

Containerized Databases

Oracle was late to the party but is quickly catching up. It has joined the CNCF, is working with Kubernetes, and is using containers in the Oracle cloud.

  • Advantages of VMs without the performance penalty

OpenWorld 2017: MySQL Automatic Diagnostics: System, Mechanism, and Usage

Shangshun Lei and Lixun Peng from Alibaba Cloud discussed a system they have built called CloudDBA to automate a lot of traditional DBA work. The system sounds pretty cool and is what my team should aspire to. But there was not a lot of information on the how, and a lot on the what, of CloudDBA. My notes from the session are:

  • Why CloudDBA
    • Reduce Costs –
      • 80% of time spent on finding root cause, optimizing performance, scaling hardware and resources
      • 20% on database platform
    • Focus your resources on business
    • Provide best technology
  • Architecture
    • Kafka/JStorm for log collection
    • Offline Data repository
      • Error log
      • slow log
      • audit log
      • cpu/ios/status
    • Offline diagnostics
      • Top SQL Analysis
      • Trx Analysis
      • SQL Review
      • Deadlock Analysis
    • Online diagnostics
      • Knowledge base – rule engine
      • Inference Engine – matches conditions and runs executions to resolve issues or provide advice to users
    • Real-time events and advice
      • Slave delay
      • Config Tuning
      • Active Session
      • Lock and Transaction
      • Resource
  • Rule Engine
    • Immediate detection of useful changes with low cost
    • Choose correct inference model
      • Database global status is mature and easy to get
      • High frequency monitoring to make sure no useful info is missed
      • Real time state change detection algorithms
      • Importance of database experience
  • Knowledge Base and inference engine
    • Ability to accumulate DBA experts’ experience in short time
    • Accurate issue detection & corresponding advice
  • Offline diagnosis
    • Audit log does matter
    • Record full SQLs for database
    • A feature of AliSQL, no performance impact
  • Transaction analysis
    • uncommitted transactions
    • long transactions
    • long interval between transactions statements
    • big transactions
  • SQL review
    • how many types of sql
    • how many types of transactions
    • sqls or sequence in transaction is expected or not
    • scan rows, return rows, elapsed time, and SQL advice
  • Top SQL
    • top sql before optimize
    • help explain questions such as why my cpu is 100%
    • different statistics dimensions and performance metrics
  • SQL Advisor
    • Not a database optimizer
    • built outside of MySQL kernel
    • query rewriter
    • follow rules to create indexes that work for the lowest cost
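
The knowledge base / inference engine described above (rules that match conditions in monitored status and emit advice) can be sketched in a few lines of Python. The rule names, thresholds, and advice strings here are hypothetical examples, not CloudDBA’s actual logic:

```python
# Minimal rule-engine sketch: a knowledge base of rules is matched against
# database status samples, and matching rules emit advice to the user.
# Rule names, thresholds, and advice text are illustrative only.

RULES = [
    {
        "name": "slave_delay",
        "condition": lambda s: s.get("seconds_behind_master", 0) > 60,
        "advice": "Replica is lagging; check long-running writes on the primary.",
    },
    {
        "name": "long_transaction",
        "condition": lambda s: s.get("longest_trx_seconds", 0) > 300,
        "advice": "Long/uncommitted transaction detected; consider killing it.",
    },
    {
        "name": "cpu_saturation",
        "condition": lambda s: s.get("cpu_pct", 0) > 95,
        "advice": "CPU is saturated; review Top SQL for the heaviest queries.",
    },
]

def infer(status):
    """Inference engine: return advice for every rule whose condition matches."""
    return [r["advice"] for r in RULES if r["condition"](status)]

sample = {"seconds_behind_master": 120, "cpu_pct": 40}
print(infer(sample))
```

The real system pairs this kind of matching with high-frequency status collection so that useful state changes are not missed between samples.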

OpenWorld 2017: Kafka, Data Streaming and Analytic Microservices

My company has been wrestling with the need for a more scalable and standardized method of integrating applications for a while.  We have taken steps toward moving to an enterprise data bus to improve how we do this.  Stewart Bryson’s session was a great example of how Kafka can be used to do this.  While his focus was on BI and data integration, he did a great job of showing how Kafka improves the story for other use cases as well.   My notes from his session are:


  • History Lesson
    • Traditional Data Warehouse
      • ETL -> Data Warehouse -> Analytics
    • All analytic problems can’t be solved by this paradigm
      • Realtime events
      • Mobile Analytics
      • Search
      • Machine Learning – Old BI prejudices pattern matching
  • Analytics is not just people sitting in front of dashboards making decisions
  • Microservices
    • “The microservice architecture is an approach to developing a single application as a suite of small services, each running its own process and communicating with lightweight mechanisms, often an HTTP resource API.  These services are built around business capabilities and independently deployable by fully automated deployment machinery.” – Martin Fowler
    • Opposite of the Monolith
    • separate but connected
    • API contracts
  • Proposed Kafka as single place to ingest your data
  • Use Kafka as our data bus
  • “Analytics applications” are different than “BI platforms”
    • Data is the product
    • Needs SLAs and up times (internal or external)
  • Apache Kafka
    • Kafka – not state of the table but more like the redo log (true data of the database)
    • capturing events not state
    • sharded big data solution
    • topics – schemaless “table”
    • schema on read not on write
      • If you have schema at all
      • just bytes on a message
      • you choose the serialization
    • topics are replicated by partitions across multiple nodes (called brokers in Kafka)
    • Confluent is open-source Kafka with add-ons (they also have paid versions)
    • producers put data into Kafka
    • consumers read data from kafka
    • producers and consumers are decoupled (no need to know who consumes the data to put it in)
  • kafka connect
    • framework for writing connectors for kafka
    • sources and sinks (producers and consumers)
  • Kafka knows where you are in the stream using a consumer group
  • Can reset your offsets to reload data
  • makes reloading data warehouse much easier
  • makes testing new platforms easier
  • Event Hub Cloud service – Oracle’s kafka in the cloud
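
The ideas above — a topic as an append-only log of bytes, consumer groups tracking their own offsets, and resetting offsets to reload data — can be illustrated with a tiny in-memory sketch. This is a toy model of the concepts, not the real Kafka client API:

```python
# Toy model of a single-partition Kafka topic: an append-only log of bytes.
# Consumer groups track their own offsets, so data can be re-read at will.
# This illustrates the concepts only; it is not the real Kafka client API.
import json

class TinyTopic:
    def __init__(self):
        self.log = []        # append-only "partition" of raw bytes
        self.offsets = {}    # consumer group -> next offset to read

    def produce(self, value):
        # Kafka stores just bytes; the producer chooses the serialization.
        self.log.append(json.dumps(value).encode("utf-8"))

    def consume(self, group):
        start = self.offsets.get(group, 0)
        records = [json.loads(b.decode("utf-8")) for b in self.log[start:]]
        self.offsets[group] = len(self.log)   # commit the new offset
        return records

    def reset(self, group, offset=0):
        # Resetting offsets lets a consumer reload data, e.g. to rebuild
        # a data warehouse or to test a new platform against old events.
        self.offsets[group] = offset

topic = TinyTopic()
topic.produce({"event": "order_created", "id": 1})
topic.produce({"event": "order_shipped", "id": 1})

print(topic.consume("warehouse_loader"))   # both events
print(topic.consume("warehouse_loader"))   # nothing new since last read
topic.reset("warehouse_loader")
print(topic.consume("warehouse_loader"))   # full reload from offset 0
```

Note how producers and consumers never reference each other; they are decoupled through the log, which is what makes reloading a warehouse or testing a new platform so easy.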

Thanks to Robin Moffatt (@rmoff) for correcting the notes around schema. He pointed out that with Kafka it is just bytes on a message, so you choose the serialization format and thus whether there is a schema.

OpenWorld 2017: Getting the Most out of Oracle Data Guard!

We have been working to roll out Data Guard in our Oracle environment for a few years now. The issues have not been technical so much as other priorities getting in the way. This session grabbed my attention, as I was interested to see other use cases (beyond HA and DR) that we could solve with Data Guard. Ludovico Caldara not only put on a great session, but he did an outstanding job of making the materials available prior to the session on his blog.

Because he did such a great job, I won’t bore you with a lot of the details. But he shows you how to use Data Guard to perform online database migrations, use the standby for reporting, and perform database clones. It is all really cool stuff that my team could leverage if we finish rolling out Data Guard.


OpenWorld 2017: Best Practices for Getting Started with Oracle Database In-Memory 12c

Kicked off Oracle OpenWorld with a great session on Oracle Database In-Memory. The speaker (Xinghua Wei) did a great job of presenting both the technology itself as well as use cases and things to watch out for. My team has started to look at the future of Exadata inside of our support model. We have loved the performance we have gotten out of the appliances, but not the coordination required to organize patching with Oracle or the hyper-consolidation needed to justify the additional costs of running Exadata.

The biggest pushback that I get from my team (and customers) is concern about analytic workloads moving off the Exadata. While it may not be the whole solution, Oracle’s In-Memory is one tool we have that could help with those concerns. In this session I learned some new things about In-Memory that will help us better understand how it might play in our environment. First, the feature is an extra SKU, and a quick search gave $23K per CPU unit as the book price. While not exactly cheap, it sure beats the heck out of buying a whole Exadata. More notes are included below. I’m still not sure if this is something that we will use, but it’s nice to have a better understanding of the technology as we look at options other than Exadata.


  • In-Memory stores data in Columnar Format
  • Data exists along with traditional row based tables
  • 20% impact in TPS on row-based tables associated with In-Memory tables
    • Can be partially mitigated by removal of indexes on the tables that were used to support OLAP workloads
    • Results will vary based on a number of factors: design, workloads, etc.
  • Works with RAC and Data Guard
  • Differentiator vs. SAP HANA, SQL Server, DB2
    • 2 formats in 1 database (row and columnar)
    • 100% application transparency
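
The row-vs-columnar distinction in the notes above can be illustrated with a tiny sketch: the same data kept in both layouts, where an analytic scan only has to touch the one column it needs. This is a toy illustration of the concept, not how the database implements it:

```python
# Toy illustration of dual-format storage: the same table kept row-wise
# (good for OLTP point lookups) and column-wise (good for analytic scans).
# This mirrors the In-Memory idea conceptually; the real feature maintains
# the columnar copy in memory alongside the row store, transparently.

rows = [
    {"order_id": 1, "region": "east", "amount": 120.0},
    {"order_id": 2, "region": "west", "amount": 75.5},
    {"order_id": 3, "region": "east", "amount": 9.5},
]

# Columnar copy: one contiguous list per column.
columns = {key: [r[key] for r in rows] for key in rows[0]}

# OLTP-style point lookup favors the row format.
order_2 = next(r for r in rows if r["order_id"] == 2)

# Analytic aggregate favors the columnar format: only "amount" is scanned,
# which is also why OLAP-support indexes on the row store become removable.
total = sum(columns["amount"])

print(order_2["amount"])  # 75.5
print(total)              # 205.0
```

Both copies describe the same data, which is the “2 formats in 1 database” differentiator from the notes.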

Percona Live Data Performance Conference 2016

I’m sitting in the San Francisco airport trying to get enough caffeine in me to feel alive again. A sure sign of a great conference. Last year I live-blogged the conference, but this year there was so much stuff that I wanted to absorb that I failed to make time for that. There were a ton of great sessions. It did feel like there were fewer folks in attendance, but that only made the learning feel more personal. This year the folks at Percona opened up the conference to encourage content across all platforms. I wonder if that made some of the MySQL folks worry that the conference didn’t have anything to offer them. I can tell you that there was plenty of great MySQL content.

Working in an enterprise database administration shop, the expanded content was perfect for me. While we have historically only supported 2 platforms, we currently support 4 (about to be 5) and are constantly being pushed to support more. This is a natural extension of modern application design, and having a conference where we can hit on several platforms really helps. We all have tight budgets these days, and it’s almost like having 3-4 conferences in one. But the soul of the conference is still very much MySQL. With content geared towards first-time DBAs through sessions on InnoDB and the new MySQL Document Store, you can’t go wrong with this conference to expand your MySQL knowledge.

I am coming home with a bunch of great ideas. The hard part now is to pick which things to work on implementing first. Some will be changes my team can make ourselves, and others will need to involve change in other teams as well. I was very intrigued by the “chatops” idea. I will try to see how we can integrate the concept into our operations. I was similarly interested in the rise of integrating chat bots into database automation. We are hitting database automation real hard right now. Thus far our focus has been on self-service APIs, but I think we will add a natural-text interface into these APIs to further improve the experience for our customers.

Well, I am out of coffee so I need to run and refill the cup.  More to come on Percona Live and all the projects that come out of it.

It’s Been a While

Well, life happens, and like everyone else I’ve been busy. I have, however, been working on some exciting stuff, so I thought it was time to get back to blogging to share all the cool stuff going on. First, I have been working in management. This means that I have to steal cycles to do the techie stuff these days. Second, my son has started playing travel soccer, which steals most of my early evenings and a good part of my weekends. I do love to see him compete, however, so no complaints here. All this is to say that I have had to be more deliberate in what I have been working on.

I plan for this to be the first in a series of blogs, and I thought it best to go over some of the topics that have been taking up my time. I will list out both technical and non-technical issues that I have been wrestling with and provide a little context. My hope is this information will help me prioritize my subsequent blog posts as well as give some context to them when they come out.


Last year, my team stepped up to the plate and went from zero to fully supporting Highly Available MySQL in about 9 months. We spent time learning the platform, sharing knowledge, and testing different configurations. We also worked with Percona to have an independent expert come out and verify that we were on the right track. When it was all said and done, we not only had a handful of production applications on stand-alone MySQL instances and an application running on top of Galera, we also realized that MySQL gives us much better control of the data platform than our primary platforms (Oracle and Microsoft SQL Server). My technical contribution to this project was providing a one-day hands-on Galera training for the team so they could better understand that technology.


After spending last year rolling out MySQL,  I had planned to spend this year cleaning up some remaining issues and working on standardizing our Oracle and SQL Server environments more.  Life has a way of changing my priorities.  Our company had a major initiative that required the use of MongoDB as the data store.  I had been reading up on MongoDB and had attended some local user group meetings, but as a team we were basically at ground zero as far as MongoDB was concerned.  Once again, we leaned on Percona to help us get some replica sets up in support of the aggressive project goals, and I had two of my folks work through the wonderful online training provided by MongoDB Inc.

We are still very much in the middle of this project, but so far I am excited by what I have seen. I think that developers here are going to love some things about MongoDB, and it will help us solve some issues we have in being more agile with our data stores.

Database as a Service

Another big initiative my team has going on right now is transitioning to a true Database as a Service provider.  We currently have a self-help website I built years ago that lets our customers spin up development and test databases, copy databases between instances, as well as deploy code without worrying about the documentation needed in the CMDB (the application handles all that).  It has served us well, but it was built without an API.  In order to integrate with other automation initiatives within the company we need to provide all this and more via an API.

We looked at Cloud Foundry and Trove as options to “buy” our way out of this. But with all the other change going on, we decided that it would be best to implement our own API and allow others to plug in as needed. This allows us to better control the back-end services and keep our existing investments in monitoring, backups, and other processes. I am working with another member of my team to build a RESTful API for this. We have chosen to leverage NodeJS as our development platform. For a “.Net” guy this is a steep learning curve, but so far I am digging it.

I’ll try to keep you up to speed on these areas. I apologize in advance if posts come at you in a seemingly random way from here on out. I just hope to make the time to share, and with all this going on the topics are bound to blend together.