Rodel E. Dagumampan


Making the world a better place one line of code at a time.

LEAP DAY 2: DevOps and SRE at VSTS Team

DevOps and SRE at VSTS Team
Sam Guckenheimer, PO for VSTS Team

Sam shares his team’s experiences in building and delivering Visual Studio Team Services and shows their internal development process at VSTS DevLabs. It’s interesting to see their actual team’s backlog, issue tracking, and reporting dashboards. And it’s not very different from ours at F5@Ørsted. Coolness! What’s most striking: the VSTS dev team runs 69k unit tests in 29 minutes! OK, that’s something to beat. We’re nowhere near these stats.


Sam also talks about metrics and what the team doesn’t watch, and I fully agree with most of them. The team pays little to no attention to:

  • original estimates
  • completed hours
  • lines of code
  • team capacity
  • team burn down

Site Reliability Engineering (SRE)
Another key takeaway is the rise of Site Reliability Engineering (SRE). In a typical enterprise, the dev teams work on new projects while having little reserved capacity for operations, incidents, and continuous improvement. While this ensures that we continue to build new and exciting stuff, the production systems’ load is increasing, performance is degrading, databases are getting fragmented, indexes need rebuilding, and new vulnerabilities are reported and need to be fixed. These are often overlooked and only get attention when customers report issues.

To continue the story… So the team delivers a new service to production, the business starts using it, the PM celebrates wheeewww, and everyone is happy. Then the team moves to another project. Sounds good so far? Well, the biggest casualty of this model is innovation. Software services need to mature over time. I always believe the best versions will be the odd numbers: v3, v5, v7. If a service never reaches these versions, it’s just a matter of time before it blows up.

An SRE capability could be the answer to the long-standing battle of Dev vs Ops. An SRE engineer’s goal is to keep the services in tip-top condition and drive innovation into the service. An SRE must be able to write code in C# or PowerShell, kick off the CI/CD pipeline, optimize application servers, monitor and optimize databases, and perform chaos engineering tasks. I know it may overlap with the roles of server admins and DBAs, but I believe this is long overdue. We need to rethink the way we operate production systems.

At Google, SRE teams are made of 50% engineers and 50% administrators committed to improving features and the operational environment. But we are not Google, nor do we have Google scale. Still, I think we should give this a try #JFTI.

Is SRE DevOps? IMO, DevOps is a culture while SRE is a capability with a well-defined role. A team can embrace DevOps better by hiring an SRE engineer.

Action Items

  • Try out an SRE (dev/ops) role for a new hire in the team


  • Thanks to fellow LEAP attendee @henrihie for the picture





Filed under: Uncategorized

LEAP DAY 2: Azure Strategy

Azure Strategy
Ulrich Homan, Distinguished Architect
Cloud and Enterprise Engineering

Ulrich shares some very interesting statistics. Above all else is Microsoft’s transformation and embrace of the open-source community. As it is today, MS has the largest contribution on GitHub, has joined the Linux Foundation, supports SQL Server on Linux, and … hires the brightest people from the OSS community.

The State of Azure Today

  • 1.6 million cores provisioned in Q4
  • energy from renewables in 2018, 60% by 2020, 100% renewables long term
  • 72+ Tb/s backbone
  • 100 data centers
  • 42 regions
  • G15 will get its own data centers
  • SQL Server and app servers will be on different clusters
  • target 30 Gb/s throughput
  • SONiC – containerized, platform-agnostic switch OS
  • 60% of the load is Linux-based

Open Compute Project
The OCP is an open community for bringing common hyper-scale data center architectures into on-premise environments. I think it’s a very good initiative that these experiences are shared with thousands of infra professionals who have yet to move to the public cloud.

  • Started by Facebook
  • Open compute: an open-source design for cloud infra
  • Hyper-scale deployment: millions of servers, cores, hardware
  • VM scale sets -> groups of master VM templates


CoCo Framework
The Coco (Confidential Consortium) Framework is a blockchain framework designed for the enterprise. It’s open source, and it promises to accelerate the adoption of blockchain in the enterprise community.

  • secure data while in use
  • unobservable & tamperproof
  • data is completely protected
  • data is always encrypted
  • challenge: data is decrypted when loaded in RAM
  • decryption will be in the CPU, with an Intel cert chain


Filed under: Uncategorized

LEAP DAY 1: Azure Messaging

Cross Platform – open source commitment
Scott Hunter, Director

Scott shared the many different things at Microsoft that are open source, primarily the vNext features of .NET Core and ASP.NET Core. Core promises to work on both Windows and Linux platforms. There is a strong push right now to make .NET Core applications microservice-ready: small, fast to build, containerized, with monitoring built in.


.NET Core 2.0

  • faster build
  • GDPR
  • microservices and azure
  • faster internal engineering system

.NET Core 2.1 (vNext)

  • Span<T> (see the sketch after this list)
  • Tensor<T>
  • Sockets
  • Smaller install size
  • 10x client performance
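
A quick note on Span<T>, since it’s the headline feature here: my understanding is that a Span<T> is a view over existing memory, so you can slice arrays without allocating or copying. A toy sketch of my own (not from the session):

using System;

public static class SpanDemo
{
    public static void Main()
    {
        int[] data = { 1, 2, 3, 4, 5, 6, 7, 8 };

        // A Span<int> is a view over the array: no allocation, no copy.
        Span<int> middle = data.AsSpan(2, 4);   // {3, 4, 5, 6}
        middle[0] = 42;                         // writes through to 'data'

        Console.WriteLine(data[2]);             // prints 42
    }
}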

ASP.NET Core 2.1 (vNext)

  • SignalR
  • HTTPS by default
  • GDPR
  • IHttpClientFactory (caching, retry logic – see the sketch after this list)
  • 6x throughput on in-proc hosting
  • Identity
  • Webhooks
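
Here’s a minimal sketch of how I’d expect the IHttpClientFactory registration with retry logic to look, assuming the Microsoft.Extensions.Http.Polly package (the client name and backoff numbers are my own):

using System;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Register a named client; the factory caches and recycles
        // the underlying handlers for us.
        services.AddHttpClient("github", c =>
        {
            c.BaseAddress = new Uri("https://api.github.com/");
        })
        // Retry up to 3 times on transient HTTP errors (5xx, 408),
        // backing off 2^n seconds between attempts.
        .AddTransientHttpErrorPolicy(p =>
            p.WaitAndRetryAsync(3, n => TimeSpan.FromSeconds(Math.Pow(2, n))));
    }
}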

Action Items

  • Try out .NET Core 2.1 + SQL Server + Windows Container + Service Fabric


Azure Messaging
Dan Rosanova

It was late and quite heavy, so I hardly captured much out of this session. But the key takeaway here is that choosing the right messaging architecture must be driven by these three questions:

  1. What are you doing?
  2. What do you care about?
  3. What are you willing to give up to get it?

It makes perfect sense. If you want ultra-low latency, then you may have to give up atomicity or absolute consistency. The consistency requirements for financial transactions are different from telemetry, logs, or data streaming. Again, it’s what we care about and what we are willing to give up.

Dan shared some pretty impressive stats from Azure:

  • 1.2 trillion requests/day
  • 2 million topics in prod
  • 99.9998% success rate
  • > 30 PB monthly data volume
  • 42 regions
  • 750 billion messages on Azure Service Bus


Filed under: Uncategorized

LEAP DAY 1: Blockchain

Matthew Kerner, GM Blockchain

Blockchain is not Bitcoin; Bitcoin is a blockchain. A block is a group of transactions, and a blockchain is a group of connected blocks. A blockchain is trusted if many, many computers see the same blockchain as untampered; the more computers see the same, the more trusted the blockchain is. The transaction completes and the owner of the computer gets paid for the service. Done, no middleman.

Blockchain is a distributed ledger where no single entity exclusively performs settlement or clearance of economic transactions. This is made possible by cryptography and distributed computers performing calculation and verification of transactions. If these massively distributed machines reach a consensus on the integrity of a transaction, the transaction is cleared and settlement is completed. Very promising.
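
To make the “connected blocks” idea concrete, here’s a toy sketch of my own (not from the session): each block carries the hash of the previous block, so tampering with any block changes every hash after it, and the rest of the network can see that the chain no longer matches.

using System;
using System.Security.Cryptography;
using System.Text;

public class Block
{
    public string PreviousHash;
    public string Transactions;   // simplified: the block's payload as one string
    public string Hash;

    public Block(string previousHash, string transactions)
    {
        PreviousHash = previousHash;
        Transactions = transactions;
        Hash = ComputeHash();
    }

    // A block's hash depends on its payload AND the previous block's hash,
    // which is what chains the blocks together.
    public string ComputeHash()
    {
        using (var sha = SHA256.Create())
        {
            var bytes = sha.ComputeHash(
                Encoding.UTF8.GetBytes(PreviousHash + Transactions));
            return BitConverter.ToString(bytes).Replace("-", "");
        }
    }
}

Verifying the chain is then just walking the blocks and recomputing each hash.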

IMO, the principles of any financial instrument (gold, cash, equities, etc.) have not changed ever since: they are based on TRUST. A trust that a bucket of fruits has the same value as a piece of gold, amber, paper, bitcoin, or a smart contract. This trust is fundamental for blockchain-based financial transactions to go mainstream. It’s exciting to see where it will take us.

Action items

  • Dissect a blockchain algorithm


Filed under: Uncategorized

LEAP DAY 1: Rising Cloud Trends

Rising Cloud Trends
Mark Russinovich, Azure CTO

Mark had a broad piece here: some bits of history, the current state of affairs in Azure, and initiatives at Microsoft such as Blockchain, Deep Neural Networks, IoT Edge, and Quantum Computing. Some of my key takeaways:

Intelligent Edge
Edge is contra centralized. It means moving the intelligence from central units like data centers or a central data store into the data-producing units like micro computers, instrumentation equipment, and devices. For example, we put the intelligence into the autonomous/driverless car instead of submitting data to a central server, so in case of a fault the car can react immediately. The benefit is ultra-low latency and local real-time analytics. As the accessibility of IoT devices continues to grow and prices keep falling, this is very promising.

Serverless is an interesting concept, though I have yet to hear reference cases in the enterprise. In the last QCON, a fellow attendee shared with us that they use serverless to process images of car plates. IMO, serverless, like most microservices, best fits stateless operations, more like Input-Process-Output (see the sketch after this list).

  • an abstraction of servers, zero management
  • event driven, instance scale
  • micro-billing, pay for the compute units used
  • use cases: thumbnail generator, plate scanners, server jobs
  • reduce devops overhead
  • faster time-to-market

I don’t fully understand stuff like quantum entanglement, but what I know for sure is that it changes the way we understand how computers work. Probably the most interesting part is that soon we’ll have a new language to learn, Q#. F***, I haven’t fully exercised F# and now we have Q#??? And it’s also a different paradigm! From procedural -> object-oriented -> functional -> quantum. Our field of work has never been this exciting!

Action Items

  • Explore blockchain and smart contracts architecture


Filed under: Uncategorized

LEAP DAY 1: Azure SQL Database Elastic Pool

This year, I had the opportunity to participate in the annual Microsoft LEAP Nordic Program held at Microsoft HQ in Redmond, Washington, USA. LEAP is a 5-day intensive program tailor-made for architects and developers from the Nordic countries. My colleagues tell me it has a holistic approach and that we get to meet the best and brightest at Redmond.

In this post, I will try to wrap up my learnings and my basic understanding of what was presented.

Azure SQL Database Multi-tenancy
Neeraj Joshi, Program Manager

Neeraj talks about the various design patterns and solutions for building multi-tenant databases in Azure. While my company Ørsted is not an ISV, a multi-tenant setup or an elastic pool might be a possible solution to our various availability and scalability challenges.

We can set up three different solutions:

  • SQL Server on a VM instance
  • Single Azure SQL database
  • Elastic Pools (Pool of Azure SQL Databases)

SQL Server on a VM is IaaS and is the best fit if you want maximum control over the resources and management of SQL Server. We have full control over the instance’s RAM and CPU and can perform fit-for-purpose optimization. IMO, this is expensive, but it’s a quick step for moving an on-prem DB into the cloud.

The Single Database approach is PaaS, where we create a database in an environment possibly shared with other customers in Azure. While we don’t have full control, we benefit from all the Azure-native monitoring and management features. It’s managed by Microsoft engineers, and we only have to be concerned with choosing the right service tier, performance level, and storage. This will likely be unpopular with internal DBAs.

An Elastic Database Pool is a collection of single databases that can automatically expand at peak times and contract when demand falls. I think the primary drivers are operational budget and unpredictable demand on the databases. Pay only for what we use, scale up when demand peaks.

What’s so cool about Elastic Pool?

Let’s say we have 4 pizza shops and each has a fleet of 2 motorcycles. During peak times, shop P1 may need more riders than the others, but it can’t get them because it only has 2 bikes. While the other 6 bikes sit idle in the other shops, P1 is sad because it can only commit to delivering 2 orders. The other shops are also sad because they pay their riders an hourly salary while they sit idle.

Elastic fleet: let’s make a fleet of 8 bikes and dispatch them based on just-in-time demand. In this case, P1 gets 4 riders to deliver all its orders and pays only per service. And the other shops don’t have to pay anything.

This is a very efficient way of utilizing resources. Resources here are CPU, RAM, and the compute spent moving data between RAM and disk. And in Azure, resources are money.
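
The arithmetic behind the analogy, as a toy sketch with my own numbers:

using System;
using System.Linq;

public static class ElasticFleetDemo
{
    public static void Main()
    {
        int[] peakDemand = { 4, 1, 1, 1 };   // riders needed per shop (P1..P4)

        // Dedicated model: each shop is capped at its own 2 bikes.
        int dedicated = peakDemand.Sum(d => Math.Min(d, 2));

        // Pooled model: one shared fleet of 8 bikes serves the total demand.
        int pooled = Math.Min(peakDemand.Sum(), 8);

        Console.WriteLine($"Dedicated fleets deliver: {dedicated} orders"); // 5
        Console.WriteLine($"Shared fleet delivers: {pooled} orders");       // 7
    }
}

Same total capacity, more orders delivered. That’s the elastic pool in a nutshell.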

Action Items

  • RFC for Azure SQL Data Warehouse
  • POC on Elastic Database Pool


Filed under: Uncategorized

Arnis: Evaluating cloud provider free tiers

This Saturday, I was looking at cloud providers to host Arnis’s REST services. I believe that all providers are built on the same cloud principles and offer the basic services for microservice hosting. So my selection criteria are very simple: it must be reliable, easy to start with, and free as in beer 😉 as I don’t wish to spend much personal money on it!

Based on this, I shortlisted 4.

  • AWS (market leader and very popular with the OSS community, an Amazon company)
  • Azure (popular with large enterprises like my employer, a Microsoft company)
  • Google Cloud Compute (it’s Google.)
  • Heroku (heard a lot about it, a Salesforce company)

*** This is not a feature-by-feature comparison, since each offers a unique package and spices. I noted only those that fit my requirements.

Amazon Free Tier

Additional spices:

  • DynamoDB NoSQL DB
    • 25 GB storage -> this is niiiiice!!
    • free until full utilization
  • Docker container registry
    • 500 MB/month -> sweetness!
    • expires after 12 months

Azure F1

  • *free forever
  • 60 CPU minutes/day – this is fine for low-traffic sites
  • 10 applications
  • 1GB RAM, 1GB storage
  • requires credit card
  • no bill shock

Additional spices:

  • *free forever for students, professors and researchers via Dreamspark

Azure via Visual Studio Dev Essentials

  • expires after 12 months
  • the Azure F1 + $25 monthly Azure credit
  • plus++ lots of other stuff I don’t need
  • for new Azure users
  • no bill shock

Google Cloud Compute

  • expires after 2 months -> such a cheapskate, huh!!!
  • $300 credit free
  • requires credit card
  • no bill shock

Heroku Free

  • *free forever
  • sleeps after 30 mins of inactivity
  • usable only 18 hours/day (required to sleep 6 hours/day)
  • 512 MB RAM

Additional spices:

  • 25MB Redis RAM, 20 connections -> this can be a deal breaker!

The Verdict

Three important learnings.

  1. There’s no such thing as “free forever”; it should be “free for now”. These companies change the scope of free over time, e.g. Heroku’s free tier was limited and they now offer a “Hobby” tier for $7/month. And Google will have to change their tier, because it just doesn’t make sense.
  2. I can get the best of everything by combining their Value Added Services (the spices). I can get my app hosted in Azure for unlimited time, take AWS’s 25 GB of NoSQL database, and optimize later with Heroku’s Redis Cache!
  3. I have a strong distrust of Amazon and I always push this option down. (It started when the Amazon Prime scam sneakily charged me $79 without prior notice. It checks any card you have on your account and charges whatever it sees as a possible money source! One critic called this “cynical corporate rape”, and the practice was banned by a watchdog.)

And Google? Told you, it’s a nonsense offer 😀

So I started with Azure. Simply, I can get things done fast with Azure.
I decided based on my primary selection criteria + leverage of my strengths in .NET + gut feel.


Filed under: architecture, arnis, cloud computing

Arnis: Choosing technology stack for building Arnis.Web

These recent early mornings, I have been digging around for the right technology stack for building Arnis.Web. Btw, Arnis.Web would be the web-based, near-real-time tracking of project dependencies using the Arnis API. Basically, so you don’t have to look at Notepad every time.

My primary selection criteria:

  • must be free as in beer
  • open source, lovin it
  • cloud-compatible
  • easy to learn, I don’t have much time to dig into everything
  • easy to provision, deploy fast, fail fast

Nice to haves:

  • friendly with Docker
  • possibly CI with Appveyor

General components would be:

  • nosql db
  • web api
  • web UI framework
  • javascript framework
  • web server
  • cloud platform

After reading through, I have shortlisted these alternatives. Check back later to see what I ended up using.

  • nosql db (MongoDB, Redis)
  • web api (NodeJS, ASP.NET 5/Core)
  • web ui (ASP.NET MVC, Bootstrap)
  • javascript framework (angularJS, jQuery)
  • web server (NodeJS, Apache)
  • cloud platform (Azure Dev Essentials, AWS Free Tier)

Filed under: architecture, architecture & governance

Arnis: A no-brainer dependency tracker for .NET solutions

In my 10+ years of delivering software, I have encountered these kinds of questions many times:

  • What does it take to migrate all projects to a new Visual Studio IDE?
  • What are the different O/R mapping tools we have used?
  • What mocking frameworks do we use in unit tests?
  • What open source tools has our company used? Legal would like to know!

In most cases I just had to do a quick file search on the branch folder and look at the project files, and at some point I had written some dirty code to search. But not this time!! Not when we have to do this over 100+ solutions, with hundreds of projects and possibly thousands of dependencies. So while my wife was cooking, I wrote a simple dependency tracker for .NET solutions. The project is available on GitHub as project Arnis.

Arnis is a no-brainer dependency tracker for .NET applications using an elementary parsing algorithm.

At the moment, you can:

  • track applications built on Visual Studio from 2001 to 2015
  • track target framework versions
  • track referenced assemblies from NuGet packages and the GAC/Referenced Assemblies folder
  • extend it to support new trackers and sinks

How to use:

c:\arnis /wf:"<your_workspace_folder>" /sf:"<your_desired_csv_file>" /skf:<skip_these_folders>

Example (simple):

c:\arnis /wf:"c:\github\arnis" /sf:"c:\stackreport.arnis.csv"

Example (with skip file):

c:\arnis /wf:"c:\github\arnis" /sf:"c:\stackreport.arnis.csv" /skf:"c:\skip.txt"

where skip.txt contains the list of folders to skip.
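
For illustration only (hypothetical paths; I assume one folder per line):

c:\github\arnis\packages
c:\github\arnis\bin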

How it works:

Trackers scan your target workspace folder and perform analysis of the solutions and projects. The trackers’ results are then consolidated to form a dependency tree. Sinks save the result in a specific format or destination. Currently, only the CSV file format is supported.
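
A rough sketch of how I think about the tracker/sink split (illustrative names only, not the actual Arnis contracts):

using System.Collections.Generic;

// A tracker scans the workspace and reports the dependencies it finds.
public interface ITracker
{
    List<Dependency> Run(string workspaceFolder, List<string> skipFolders);
}

// A sink persists the consolidated results, e.g. as a CSV file.
public interface ISink
{
    void Write(string destination, List<Dependency> dependencies);
}

public class Dependency
{
    public string Solution;
    public string Project;
    public string Component;   // e.g. a NuGet package or a GAC assembly
    public string Version;
}

New trackers (say, for web projects) or new sinks (say, the WebAPI sink on my next-steps list) can then plug in without touching the core.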


Arnis cannot guarantee 100% reliability; it is not a runtime dependency tracer. If you need more sophisticated runtime analysis, I recommend the Dependency Walker, ILSpy, NDepend, or Reflector tools.

Next steps:

  • support web projects
  • create a WebAPI sink so I can do automated analysis and reporting

By consistently monitoring the technology stack in our solutions portfolio, we can better plan for component upgrades, monitor 3rd-party usage and licenses, consolidate component versions, and strategize the decommissioning of projects and tools.

I am very excited about this pet project 😉
Feel free to fork it, refactor, or build new sinks.

Filed under: architecture & governance, developing software

Building an Enterprise .NET Applications Portfolio

At work, our main objective in the .NET Architecture & Governance group is to manage risks and reduce complexity in the enterprise. We can achieve this through due diligence, compliance, and informed decision making. To make an informed decision, we need to know where we are and what we currently have.

What we need is a comprehensive Application Portfolio (AP) where we can quickly get an application’s basic information, technology stack, architecture, and operational attributes. An AP can be very useful:

  • When we need to estimate and kick-off a new project
  • When we need an internal reference architecture to serve as baseline architecture
  • When we need to tap an SME or experienced colleagues
  • When we need to roll out a critical release of components used in several applications
  • When we need to align applications according to Enterprise Architecture (EA) strategy
  • When a component or technology is threatened with obsolescence or loss of support, like Silverlight or Flash
  • When we simply want to decommission apps because they have become too costly to keep

Getting started
For a start, we defined our roadmap:

  • Phase 1: Application Information.
    Captures the basic attributes of the solution.
  • Phase 2: Technology Stack
    Captures the entire dependency tree of the solutions, such as platforms, data stores, communication, security, testing practices, NuGet packages, and architecture styles.
  • Phase 3: Maturity Analysis
    Analyze the solutions based on multiple lifecycle factors.
  • Phase 4: Automation
    Research tools for easier accessibility of the AP, automating analysis and dependency tracking.

For Phase 1, we defined the following application Basic Information fields (I have excluded the aux fields).
This is just made in an Excel Online document, shared across teams.

  • Common Name (CN)
  • Full Name (FN)
  • Description (DESC)
  • Business Area (BA)
  • Business Responsible (BR)
  • Technical Contacts (TC)
  • Product Owner (PO)
  • Scrum Team (ST)
  • Scrum Master (SM)
  • Lead Architect (LA)
  • Solution Architect (SA)
  • Release Manager (RM)
  • Infrastructure Delivery Manager (IDM)
  • Workspace (WS)
  • Source Code Location (SCL)
  • Solution Architecture Documentation (SAD)
  • Solution Infrastructure Documentation (SID)


When we started in Oct 2015, we had 130+ applications in our portfolio, and it can be a challenge to get a strong commitment from different teams to collect the right information. I must admit, it’s not the most exciting task to scour those docs and chase POs and SMEs on top of their tight deliveries. It’s also a task in itself to keep this updated when key persons leave the company.

Early benefits
We are so close to finishing, and I hope to consolidate all teams into a single AP this Q1 2016. We have already seen early benefits when we had to answer where things are. We identified which solutions have not moved to the new source code repository. We identified solutions already decommissioned, those without any activity anymore, and those that never reached production.

Next steps forward
Meanwhile, we have initiated Phase 2: Technology Stack. I think this is the most exciting part! I am working on Arnis: a no-brainer dependency tracker for .NET applications. Arnis is available on GitHub and I’d appreciate your contribution 😉

I will document our journey in this blog, and I hope you share how you do this in your company too.

Filed under: architecture, governance