Rodel E. Dagumampan


Making the world a better place one line of code at a time.

Redis, Redis Enterprise, Redis Labs & Future of Redis

At the Redis Labs workshop I attended this week in Copenhagen, we had a technical deep dive on Redis and Redis Enterprise. It’s great that Redis Labs prepared ready-to-use Docker images we could pull and Redis instances we could SSH into right away. Cool way to run a workshop. I have known about Redis for a while but have yet to get practical experience with it; probably I’ve been drowning in data and backend services far too long already :).

Here are some of the basics:

  • Redis is the REmote DIctionary Server.
    You’ve been creating hash tables, key-value pairs, and tuples in C#; Redis lets you make them distributed.
  • Redis is a data store, a distributed memory-first data store.
    Because it’s memory-native it’s very fast, and because it’s distributed you can scale out to multiple machines to host your data and take advantage of the full processing power of each node.
  • Redis is for low-latency, high-throughput data flow.
    Here, I like the water pipe analogy. Latency is the average time required for water to travel from one end of the pipe to the other. Throughput is the amount of water that gets in and out of the pipe. In our world, water is data, and if we have a better pipe (RAM) we get the same amount of data in a smaller amount of time.
  • Redis is a key-value NoSQL data store.
    So, sorry, you can’t pull data with a SQL query. Move on, sql-b***h! That’s me!
  • Redis was originally created by Salvatore Sanfilippo, who made it open source. Good call, master!
  • Redis is the #1 NoSQL database on Docker Hub with 630+ million pulls, and it was rated “most loved database” in the 2018 Stack Overflow survey.
  • Redis is fast, but it’s not the fastest NoSQL data store. That’s a position it has yet to claim.
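To put rough numbers on the pipe analogy, here is a tiny back-of-the-envelope calculation in Python. The figures are made up purely for illustration, not benchmarks of any real system:

```python
# Illustrative numbers only: latency is the per-request travel time,
# throughput is how many requests get through per second.
requests = 1_000_000
latency_s = 0.0002        # 200 microseconds per round trip to a RAM-speed store
concurrency = 50          # requests in flight at the same time

# With fixed per-request latency, throughput scales with concurrency:
throughput_rps = concurrency / latency_s
total_time_s = requests / throughput_rps
print(f"{throughput_rps:,.0f} req/s, {total_time_s:.1f}s for {requests:,} requests")
# -> 250,000 req/s, 4.0s for 1,000,000 requests
```

The point of the analogy: a faster pipe (RAM instead of disk) shrinks latency, and more concurrent flow raises throughput, and the two are related but not the same thing.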

Let me share some more bits:

  • Redis is the OSS project, Redis Labs is the company that maintains Redis, and Redis Enterprise is a forked-out version with enterprise features. Seems like a trend nowadays, like Red Hat Linux and RHEL, or Elasticsearch and Elastic’s commercial offerings.
  • Redis is in-memory, but it supports durability. Durability = disk. Or, if you’re a purist and really want to go all-in with memory, you can provision a cluster so you can prevent a complete rebuild in case of catastrophic failure. You probably wouldn’t want to rebuild the entire database from scratch, right?
  • Redis is scalable and can be made highly available. You can scale out by adding new nodes and sharding your data. Clusters, nodes, and shards are the “buzzwords” today, so make sure you’re on top of them.
  • Because Redis is OSS, you can also get managed enterprise instances from other vendors. We have Azure Redis Cache and Amazon ElastiCache, or you could even create your own Redis fork.
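On the durability point: in open-source Redis, persistence is configured through point-in-time RDB snapshots and/or an append-only file (AOF) that logs every write. A minimal redis.conf sketch (the threshold values here are illustrative, not recommendations):

```
# RDB snapshots: dump the dataset to disk if at least 100 keys
# changed within the last 60 seconds
save 60 100

# AOF: append every write command to a log, fsync at most once per second
appendonly yes
appendfsync everysec
```

RDB gives compact backups at the cost of losing the last few minutes on a crash; AOF narrows the loss window to roughly a second with `everysec`. Many setups enable both.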

Redis modules:

Another strength of Redis is its modules and the large community all trying to make Redis better. For the full list of modules, see the references section below.

  • Rate Limiter
  • RediSearch
  • ReBloom
  • RedisML
  • ReJSON
  • RedisGraph
  • RedisTimeSeries

Redis is competing with these NoSQL databases:

  • Memcached
  • Couchbase
  • MongoDB
  • Apache Cassandra
  • Apache Kafka
  • Hazelcast

In general, Redis can be used in the following cases:

  • High-speed caching
  • Session & state management
  • Real-time analytics
  • Message queuing via pub/sub
  • Real-time data ingestor/buffer

I’m more interested in real-world practical cases, so I short-listed those I find most relevant to me:

  • As a staging data store for a website. Redis can serve as a middle database between the web front end and the RDBMS backend. In this architecture, you get the performance benefit of an in-memory cache plus the data accessibility and ACID properties of an RDBMS.
  • As a full-page cache. Redis holds all your website pages and indexes them. For example, when a user asks for TellMeMoreAboutYourService.html, you always pick it up from memory, not from disk. Redis can also invalidate pages if you need TTL capability.
  • As a message queue. Redis serves as a pub/sub platform. For example, when a customer adds an item to the shopping cart, you may want to do a handful of things: update session information, update site counters, reserve the SKU, calculate totals, taxes, and discounts. We can queue these activities and let the customer continue shopping; the actions are eventually executed at checkout.
  • As a buffer or data ingestor for ELK. When I say ELK, I actually mean Logstash. Redis can serve as a buffer pool in front of Logstash so you can absorb event spikes. These spikes could bog down the ES indexer and potentially block other logs.
  • For the complete list, please refer to my references section. I find 99% of the cases are about improving the web and mobile experience.
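To make the caching and queuing cases above concrete, here is a minimal sketch in Python. A small in-process class stands in for a real Redis client so the snippet runs without a server; its method names mirror the Redis commands you would actually call (GET, SETEX, LPUSH, RPOP). In a real setup you would point a client library such as redis-py at a live instance instead.

```python
import time
from collections import deque

class FakeRedis:
    """Tiny in-process stand-in for a Redis client (no server needed)."""
    def __init__(self):
        self.store = {}       # key -> (value, expires_at or None)
        self.queues = {}      # queue name -> deque

    def setex(self, key, ttl_seconds, value):
        # SETEX: set a value with a time-to-live, e.g. for full-page caching
        self.store[key] = (value, time.time() + ttl_seconds)

    def get(self, key):
        value, expires_at = self.store.get(key, (None, None))
        if expires_at is not None and time.time() > expires_at:
            del self.store[key]   # TTL elapsed: the cached page is invalidated
            return None
        return value

    def lpush(self, queue, item):
        # LPUSH: enqueue work (e.g. "update cart totals") for later processing
        self.queues.setdefault(queue, deque()).appendleft(item)

    def rpop(self, queue):
        # RPOP: a worker drains from the other end, so ordering is FIFO overall
        q = self.queues.get(queue)
        return q.pop() if q else None

def render_page(r, path):
    """Cache-aside: serve from memory, fall back to the 'slow' backend."""
    page = r.get(path)
    if page is None:
        page = f"<html>{path}</html>"   # pretend this hit the RDBMS/disk
        r.setex(path, 60, page)         # cache the rendered page for 60 seconds
    return page

r = FakeRedis()
render_page(r, "/about.html")           # miss: builds and caches the page
print(render_page(r, "/about.html"))    # hit: served straight from the cache

r.lpush("cart-events", "reserve sku-42")
r.lpush("cart-events", "recalculate totals")
print(r.rpop("cart-events"))            # -> reserve sku-42 (processed first)
```

The same shape covers the staging-datastore, full-page-cache, and message-queue cases: reads go to memory first, writes can be deferred onto a queue and drained by workers.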

Who is the general audience of Redis?
Developers. To me, this makes Redis a no for machine learning. IMO, I would like our data scientists to focus on crunching data and building models rather than on understanding how to pull out and project data into a useful structure.

The Redis Manifesto is about reducing complexity. True enough, Redis tries to do as little as possible so your execution is fast, but in practice it just pushes the complexity back to the application or the data users.

Kafka just introduced KSQL; will Redis follow through?
So far, no. I asked Redis Labs about this and they have no immediate plans.

When do you go with Redis Enterprise?

  • When you have reached the limits of your network and you need the “**ties” in your instances: high availability, geo-redundancy, security, etc.
  • When disaster recovery is a primary driver.
  • When governance drops the axe and demands an enterprise SLA.

Future: Redis Streams

Redis is taking on Kafka, and it’s going to be exciting. Today, we want to receive data as close to real time as possible, and we should be able to correlate data as it arrives in real time. The process is called windowing. This is already offered by Kafka Streams and Spark Streaming, but these streaming systems are disk-based and a lot more complex and costly to operate. My first deep dive with Kafka wasn’t very pleasant. F** the ZooKeeper.

Redis Streams is a pub/sub platform with snapshot support, just like Kafka. Redis Labs says it’s not in GA yet, but it will be soon; most likely it will be included in the Redis 5 release.
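What makes a stream different from classic fire-and-forget pub/sub is that it is an append-only log: every entry gets an ID, and a consumer can resume reading from any position. Here is a toy Python model of that idea (a simplification of the real XADD/XRANGE commands; real Redis entry IDs are millisecond-timestamp-sequence pairs, not a plain counter):

```python
class MiniStream:
    """Toy model of a Redis Stream: an append-only log of (id, payload)."""
    def __init__(self):
        self.entries = []
        self.seq = 0

    def xadd(self, payload):
        # Append an entry and return its ID, like XADD does.
        self.seq += 1
        entry_id = f"{self.seq}-0"
        self.entries.append((entry_id, payload))
        return entry_id

    def xrange(self, after_id=None):
        """Read entries after a given ID. This is what lets a consumer
        resume where it left off, unlike classic pub/sub where a missed
        message is simply gone."""
        if after_id is None:
            return list(self.entries)
        ids = [entry_id for entry_id, _ in self.entries]
        start = ids.index(after_id) + 1
        return self.entries[start:]

s = MiniStream()
s.xadd({"sensor": "t1", "temp": 21.5})
last = s.xadd({"sensor": "t1", "temp": 22.0})
s.xadd({"sensor": "t1", "temp": 22.4})
print(s.xrange(after_id=last))   # only the entries appended after `last`
```

Because the log is persistent and addressable by ID, windowing over recent entries becomes a range read rather than something the broker has to push at you.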


Redis Manifesto

Redis Cluster Architecture Use Cases

15 Reasons Caching is Best with Redis (Redis Labs PDF)

Redis Recognitions

Redis Performance review

Redis Competitors

Redis and SQL Server
Using Redis with SQL Server

Redis Modules

Redis Implementation Matrix

Redis Enterprise by Redis Labs

Azure Redis Cache by Microsoft

Elasticache by Amazon

Filed under: technology

LEAP DAY 2: DevOps and SRE at VSTS Team

DevOps and SRE at VSTS Team
Sam Guckenheimer, PO for VSTS Team

Sam shares his team’s experiences in building and delivering Visual Studio Team Services and shows their internal dev process at VSTS DevLabs. It’s interesting to see the team’s actual backlog, issue tracking, and reporting dashboards, and it’s not very different from ours at F5@Ørsted. Coolness! What’s most striking: the VSTS dev team runs 69k unit tests in 29 minutes! OK, that’s something to beat. We’re nowhere near those stats.


Sam also talks about metrics and what the team doesn’t watch, and I fully agree with most of them. The team pays little to no attention to:

  • original estimates
  • completed hours
  • lines of code
  • team capacity
  • team burn down

Site Reliability Engineering (SRE)
Another key take-away is the rise of Site Reliability Engineering (SRE). In a typical enterprise, the dev teams work on new projects while having little reserved capacity for operations, incidents, and continuous improvement. While this ensures that we continue to build new and exciting stuff, production systems’ load is increasing, performance is degrading, databases are getting fragmented, indexes require rebuilding, and new vulnerabilities are reported and need to be fixed. These are often overlooked, and only when customers report issues do they get attention.

To continue the story… the team delivers a new service to production, the business starts using it, the PM celebrates, wheeewww, and everyone is happy. Then the team moves on to another project. Sounds good so far? Well, the biggest casualty of this model is innovation. Software services need to mature over time. I always believed the best versions are the odd numbers: v3, v5, v7. If a service never reaches those versions, it’s just a matter of time before it blows up.

An SRE capability could be the answer to the long-standing battle of Dev vs. Ops. An SRE’s goal is to keep services in tip-top condition and drive innovation into the service. An SRE must be able to write code in C# or PowerShell, kick off the CI/CD pipeline, optimize application servers, monitor and optimize databases, and perform chaos engineering tasks. I know it may overlap with the roles of server admins and DBAs, but I believe it has been long overdue. We need to rethink the way we operate production systems.

At Google, SRE teams are made up of 50% engineers and 50% administrators committed to improving features and the operational environment. But we are not Google, nor do we have Google scale. Still, I think we should give this a try #JFTI.

Is SRE DevOps? IMO, DevOps is a culture while SRE is a capability with a very defined role. A team can embrace DevOps better by hiring an SRE engineer.

Action Items

  • Try out an SRE role for a new hire in the team (dev/ops role)


  • Thanks to fellow LEAP attendee @henrihie for the picture




Filed under: Uncategorized

LEAP DAY 2: Azure Strategy

Azure Strategy
Ulrich Homan, Distinguished Architect
Cloud and Enterprise Engineering

Ulrich shares some very interesting statistics. Above all else is Microsoft’s transformation and embrace of the open-source community. As it stands today, MS has the largest contribution on GitHub, has joined the Linux Foundation, supports SQL Server on Linux, and … hires the brightest people from the OSS community.

The State of Azure Today

  • 1.6 million cores provisioned in Q4
  • energy from renewables from 2018, 60% by 2020, long term 100% renewables
  • 72+ Tb/s backbone
  • 100 data centers
  • 42 regions
  • G15 will get its own data centers
  • SQL Server and app servers will be on different clusters
  • target of 30 Gb/s throughput
  • SONiC – a containerized, platform-agnostic switch OS
  • 60% of the load is Linux-based

Open Compute Project
The OCP is an open community for bringing common hyper-scale data center architectures into on-premise environments. I think it’s a very good initiative that these experiences are shared with the thousands of infra professionals who have yet to move to the public cloud.

  • Started by Facebook
  • Open compute: an open-source design for cloud infra
  • Hyper-scale deployment: millions of servers, cores, and pieces of hardware
  • VM scale sets -> groups of master VM templates


CoCo framework
The Coco (Confidential Consortium) Framework is an open-source framework for bringing blockchain technology to the enterprise. It promises to accelerate the adoption of blockchain in the enterprise community.

  • secure data while in use
  • unobservable & tamperproof
  • data is completely protected
  • data is always encrypted
  • challenge: data is decrypted when loaded into RAM
  • decryption happens in the CPU, with an Intel certificate chain


Filed under: Uncategorized

LEAP DAY 1: Azure Messaging

Cross Platform – open source commitment
Scott Hunter, Director

Scott shared the many different things at Microsoft that are open source, primarily the vNext features of .NET Core and ASP.NET Core. Core promises to work on both Windows and Linux platforms. There is a strong push right now to make .NET Core applications microservice-ready: small, fast to build, containerized, with monitoring built in.


.NET Core 2.0

  • faster build
  • GDPR
  • microservices and azure
  • faster internal engineering system

.NET Core 2.1 (vNext)

  • Span<T>
  • Tensor<T>
  • Sockets
  • Smaller install size
  • 10x client performance

ASP.NET Core 2.1 (vNext)

  • SignalR
  • HTTPS by default
  • GDPR
  • IHttpClientFactory (caching, retry logic)
  • 6x throughput on in-proc hosting
  • Identity
  • Webhooks

Action Items

  • Try out .NET Core 2.1 + SQL Server + Windows Container + Service Fabric


Azure Messaging
Dan Rosanova

It was late and quite heavy, so I hardly captured much of this session. But the key takeaway is that choosing the right messaging architecture must be driven by three questions:

  1. What are you doing?
  2. What do you care about?
  3. What are you willing to give up to get it?

It makes perfect sense. If you want ultra-low latency, you may have to give up atomicity or absolute consistency. The consistency requirements for financial transactions are different from those for telemetry, logs, or data streaming. Again, it’s about what we care about and what we are willing to give up.

Dan shared some pretty impressive stats from Azure:

  • 1.2 trillion requests/day
  • 2 million topics in prod
  • 99.9998% success rate
  • > 30 PB monthly data volume
  • 42 regions
  • 750 billion messages on Azure Service Bus


Filed under: Uncategorized

LEAP DAY 1: Blockchain

Matthew Kerner, GM Blockchain

Blockchain is not Bitcoin; Bitcoin is a blockchain. A block is a group of transactions, and a blockchain is a group of connected blocks. A blockchain is trusted if many, many computers see the same blockchain as untampered; the more computers see the same thing, the more trusted the blockchain is. The transaction completes and the owner of the computer gets paid for the service. Done, no middleman.

Blockchain is a distributed ledger where no single entity exclusively performs settlement or clearance of economic transactions. This is made possible by cryptography and distributed computers performing calculation and verification of transactions. If these massively distributed machines reach a consensus on the integrity of a transaction, it is cleared and settlement is completed. Very promising.

IMO, the principles of any financial instrument (gold, cash, equities, etc.) have never changed: they are based on TRUST. A trust that a bucket of fruit has the same value as a piece of gold, amber, paper, bitcoin, or a smart contract. This trust is fundamental for blockchain-based financial transactions to go mainstream. It’s exciting to think where it could take us.

Action items

  • Dissect a blockchain algorithm


Filed under: Uncategorized

LEAP DAY 1: Rising Cloud Trends

Rising Cloud Trends
Mark Russinovich, Azure CTO

Mark had a broad piece here: some bits of history, the current state of affairs in Azure, and initiatives at Microsoft such as blockchain, deep neural networks, IoT Edge, and quantum computing. Some of my key take-aways:

Intelligent Edge
Edge is the opposite of centralized. It means moving intelligence from central units like data centers or a central data store into the data-producing units: microcomputers, instrumentation equipment, and devices. For example, we put the intelligence into the autonomous/driverless car instead of submitting data to a central server, so that in case of a fault the car can react immediately. The benefit is ultra-low latency and local real-time analytics. As the accessibility of IoT devices continues to grow and prices keep falling, this is very promising.

Serverless is an interesting concept, though I have yet to hear reference cases in the enterprise. At the last QCon, a fellow attendee shared that they use serverless to process images of car plates. IMO, serverless, like most microservices, best fits stateless operations, more like input-process-output.

  • an abstraction of servers, zero management
  • event driven, instance scale
  • micro-billing, pay for the compute units used
  • use cases: thumbnail generator, plate scanners, server jobs
  • reduce devops overhead
  • faster time-to-market

I don’t fully understand stuff like quantum entanglement, but what I know for sure is that it changes the way we understand how computers work. Probably the most interesting part is that soon we’ll have a new language to learn, Q#. F***, I haven’t fully exercised F# and now we have Q###??? And it’s also a different paradigm! From procedural -> object-oriented -> functional -> quantum. Our field of work has never been this exciting!

Action Items

  • Explore blockchain and smart contracts architecture


Filed under: Uncategorized

LEAP DAY 1: Azure SQL Database Elastic Pool

This year, I had the opportunity to participate in the annual Microsoft LEAP Nordic Program held at Microsoft HQ in Redmond, Washington, USA. LEAP is a 5-day intensive program tailor-made for architects and developers from the Nordic countries. My colleagues tell me it takes a holistic approach and that we get to meet the best and brightest at Redmond.

In this post, I will try to wrap up my learnings based on my understanding of what was presented.

Azure SQL Database Multi-tenancy
Neeraj Joshi, Program Manager

Neeraj talks about the various design patterns and solutions for building multi-tenant databases in Azure. While my company Ørsted is not an ISV, a multi-tenant setup or an elastic pool might be a possible solution to our various availability challenges.

We can set-up three different solutions

  • SQL Server on a VM instance
  • Single Azure SQL database
  • Elastic Pools (Pool of Azure SQL Databases)

SQL Server on a VM is IaaS and is the best fit if you want maximum control over the resources and management of SQL Server. We have full control over the instance’s RAM and CPU and can perform fit-for-purpose optimization. IMO, this is expensive, but it’s a quick step for moving an on-prem DB into the cloud.

The single database approach is PaaS: we create a database in an environment possibly shared with other customers in Azure. While we don’t have full control, we benefit from all the Azure-native metrics and management features. It’s managed by Microsoft engineers, and we only have to be concerned with choosing the right service tier, performance level, and storage. This is likely to be unpopular with internal DBAs.

An elastic database pool is a collection of single databases that can automatically expand at peak times and contract when demand falls. I think the primary drivers are operational budget and unpredictable demand on the databases. Pay only for what we use; scale up when demand peaks.

What’s so cool about Elastic Pool?

Let’s say we have 4 pizza shops and each has a fleet of 2 motorcycles. During peak times, shop P1 may need more riders than the others, but it can’t get them because it only has 2 bikes. While the other 6 bikes sit idle at the other shops, P1 is sad because it can only commit to 2 deliveries. The other shops are also sad because they pay their riders an hourly salary while the riders sit idle.

Elastic fleet: let’s make one fleet of 8 bikes and dispatch them based on just-in-time demand. In this case, P1 gets 4 riders to deliver all its orders and pays only for each service, and the other shops don’t pay for idle capacity.

This is a very efficient way of utilizing resources. A resource is CPU, RAM, or the compute spent moving data between RAM and disk. And in Azure, resources are money.
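The pizza-fleet dispatch above can be sketched in a few lines of Python. This is a toy model of the elastic-pool idea, not an Azure API: a shared pool of capacity is granted to tenants by current demand, busiest first, and never exceeds the pool size.

```python
def dispatch(demand, fleet_size):
    """Allocate a shared fleet to shops by demand, capped at the pool size.
    Toy model of elastic-pool resource sharing (riders ~ DTUs/eDTUs)."""
    riders = {}
    remaining = fleet_size
    # serve the busiest shop first
    for shop, orders in sorted(demand.items(), key=lambda kv: -kv[1]):
        grant = min(orders, remaining)
        riders[shop] = grant
        remaining -= grant
    return riders

# 4 shops sharing a pool of 8 riders; P1 is at peak and gets all 4 it needs
print(dispatch({"P1": 4, "P2": 1, "P3": 1, "P4": 1}, fleet_size=8))
# -> {'P1': 4, 'P2': 1, 'P3': 1, 'P4': 1}
```

Contrast with the fixed model: each shop capped at 2 riders means P1 delivers only 2 of its 4 orders while 6 riders idle. The pool serves every order with the same total headcount.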

Action Items

  • RFC for Azure SQL Data Warehouse
  • POC on Elastic Database Pool


Filed under: Uncategorized

Arnis: Evaluating cloud provider free tiers

This Saturday, I was looking at cloud providers to host Arnis’s REST services. I believe all providers are built on the same cloud principles and offer the basic services for microservice hosting. So my selection criteria are very simple: it must be reliable, easy to start with, and free as in beer 😉, as I don’t wish to spend much personal money on it!

Based on this, I shortlisted 4.

  • AWS (market leader and very popular with the OSS community, an Amazon company)
  • Azure (popular with large enterprises like my employer, a Microsoft company)
  • Google Cloud Compute (it’s Google.)
  • Heroku (heard a lot about it, a Salesforce company)

*** This is not a feature-by-feature comparison, since each provider offers a unique package and spices. I noted only what fits my requirements.

Amazon Free Tier

Additional spices:

  • DynamoDB NoSQL DB
    • 25 GB storage -> this is niiiiice!!
    • free until full utilization
  • Docker container registry
    • 500 MB/month -> sweetness!
    • expires after 12 months

Azure F1

  • *free forever
  • 60 CPU minutes/day – this is fine for low traffic sites
  • 10 applications
  • 1GB RAM, 1GB storage
  • requires credit card
  • no bill shock

Additional spices:

  • *free forever for students, professors and researchers via Dreamspark

Azure via Visual Studio Dev Essentials

  • expires 12 months
  • this is Azure F1 + $25 monthly Azure credit
  • plus++ lots of other stuff i don’t need
  • for new azure users
  • no bill shock

Google Cloud Compute

  • expires 2 months -> such a cheapskate huh!!!
  • $300 credit free
  • requires credit card
  • no bill shock

Heroku Free

  • *free forever
  • sleeps after 30 minutes of inactivity
  • usable only 18 hours/day (required to sleep 6 hours/day)
  • 512 MB RAM

Additional spices:

  • 25MB Redis RAM, 20 connections -> this can be a deal breaker!

The Verdict

Three important learnings:

  1. There’s no such thing as “free forever”; it should be “free for now”. These companies change the scope of free over time, e.g. Heroku’s free tier was cut down and they now offer a “Hobby” tier for $7/month, and Google had to change their tier because it just doesn’t make sense.
  2. I can get the best of everything by combining their value-added services (the spices). I can get my app hosted in Azure for unlimited time, take AWS’s 25 GB of NoSQL database, and optimize later with Heroku’s Redis cache!
  3. I have a strong distrust of Amazon, and it always pushes this option down for me. (It happened when Amazon Prime sneakily charged me $79 without prior notice. It checks any card on my account and charges whatever it sees as a possible money source! One writer called this “cynical corporate rape”, and the practice was banned by a watchdog.)

And Google? Told you, it’s a nonsense offer 😀

So I started with Azure. Simply put, I can get things done fast with Azure.
I decided based on my primary selection criteria, plus the leverage of my strengths in .NET, plus gut feel.


Filed under: architecture, arnis, cloud computing

Arnis: Choosing technology stack for building Arnis.Web

These recent early mornings, I have been digging around for the right technology stack for building Arnis.Web. BTW, Arnis.Web will be the web-based, near-real-time tracker of project dependencies using the Arnis API. Basically, so you don’t have to look at Notepad every time.

My primary selection criteria:

  • must be free as in beer
  • open source, lovin’ it
  • cloud-compatible
  • easy to learn; I don’t have much time to dig into everything
  • easy to provision; deploy fast, fail fast

Nice to haves:

  • friendly with Docker
  • possibly CI with Appveyor

General components would be:

  • nosql db
  • web api
  • web UI framework
  • javascript framework
  • web server
  • cloud platform

After reading around, I have shortlisted these alternatives. Check back later to see what I ended up using.

  • nosql db (MongoDB, Redis)
  • web api (NodeJS, ASP.NET 5/Core)
  • web ui (ASP.NET MVC, Bootstrap)
  • javascript framework (angularJS, jQuery)
  • web server (NodeJS, Apache)
  • cloud platform (Azure Dev Essentials, AWS Free Tier)

Filed under: architecture, architecture & governance

Arnis: A no-brainer dependency tracker for .NET solutions

In 10+ years of delivering software, I have encountered these kinds of questions many times:

  • What does it take to migrate all projects to a new Visual Studio IDE?
  • What are the different O/R mapping tools we use?
  • What mocking frameworks do we use in unit tests?
  • What open source tools does our company use? Legal would like to know!

In most cases, I just do a quick file search on the branch folder, look at the project files, and at some point I have written some dirty code to search. But not this time!! Not when we have to do this over 100+ solutions, with hundreds of projects and possibly thousands of dependencies. So while my wife was cooking, I wrote a simple dependency tracker for .NET solutions. The project is available on GitHub as Arnis.

Arnis is a no-brainer dependency tracker for .NET applications using elementary parsing algorithm.

At the moment, you can:

  • track applications built with Visual Studio from 2001 to 2015
  • track target framework versions
  • track referenced assemblies from NuGet packages and the GAC/Referenced Assemblies folder
  • extensible to support new trackers and sinks.

How to use:

c:\arnis /wf:"<your_workspace_folder>" /sf:"<your_desired_csv_file>" /skf:<skip_these_folders>

Example (simple):

c:\arnis /wf:"c:\github\arnis" /sf:"c:\stackreport.arnis.csv"

Example (with skip file):

c:\arnis /wf:"c:\github\arnis" /sf:"c:\stackreport.arnis.csv" /skf:"c:\skip.txt"

where skip.txt contains

How it looks:


How it works:

Trackers scan your target workspace folder and analyze the solutions and projects. The trackers’ results are then consolidated to form a dependency tree. Sinks save the result into a specific format or destination; currently, only the CSV file format is supported.
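The tracker/sink split above is simple to picture. Here is a toy Python sketch of the same idea: a tracker pulls `<Reference Include="...">` entries out of an old-style .csproj with a regex (the “elementary parsing” the post mentions), and a sink serializes the rows to CSV. Arnis itself is C# and more thorough; the csproj snippet and names below are made up for illustration.

```python
import csv, io, re

# A made-up old-style .csproj fragment to parse.
CSPROJ = '''<Project>
  <ItemGroup>
    <Reference Include="Newtonsoft.Json, Version=9.0.0.0" />
    <Reference Include="NUnit.Framework" />
  </ItemGroup>
</Project>'''

def track_references(csproj_text):
    # Tracker: capture the assembly name up to the first comma or quote.
    refs = re.findall(r'<Reference Include="([^",]+)', csproj_text)
    return [{"project": "Demo", "dependency": r} for r in refs]

def csv_sink(rows):
    """Sink: serialize tracker results -- here to CSV, like Arnis does."""
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=["project", "dependency"])
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()

print(csv_sink(track_references(CSPROJ)))
```

Because trackers and sinks only meet at the consolidated row format, adding a new sink (say, a web API poster) never touches the parsing side, which is what makes the design extensible.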


Arnis cannot guarantee 100% reliability; it is not a runtime dependency tracer. If you need more sophisticated runtime analysis, I recommend tools like Dependency Walker, ILSpy, NDepend, or Reflector.

Next steps:

  • support web projects
  • create a web API sink so I can do automated analysis and reporting

By consistently monitoring the technology stack in our solutions portfolio, we can better plan component upgrades, monitor third-party usage and licenses, consolidate component versions, and strategize the decommissioning of projects and tools.

I am very excited about this pet project 😉
Feel free to fork it, refactor, or build new sinks.

Filed under: architecture & governance, developing software

I am Rodel E. Dagumampan, a software architect from the Philippines building strategic projects in the renewable energy industry. I share here my random thoughts on software delivery, economics, politics, and life experiences.