Since my last post, I moved on from Timely to join Sharesies, and then moved again to join GitLab as an Engineering Manager.
Sharesies was a great place to work overall, but there were signs that things were slowing down a bit, and I had the opportunity to join GitLab as an Engineering Manager in their Reliability department.
Somewhat ironically, Sharesies announced lay-offs a couple of weeks ago, followed just recently by GitLab announcing a 7% staff cut – which unfortunately happened to include my role. In the words of Kendrick Lamar: Damn…
GitLab learnings
I was only at GitLab for ~7 months – and technically still am (I haven’t had the final “see ya!” call yet) – and I don’t regret it at all!
I went into the role uncharacteristically subdued – mostly, I think, because I was in awe; being able to work at this “big tech” company I’d looked up to for so long.
It was hugely valuable, but perhaps not for the reasons you’d assume; rather, I learnt a lot by seeing just how “normal” GitLab is behind the curtain.
Don’t get me wrong; there are some amazingly talented people working there, but (and perhaps I’ve been lucky in this regard) in my experience there have been equally brilliant people in practically every place I’ve worked to date. Perhaps we (in NZ) punch above our weight? Or my expectations were just inflated?
Nevertheless it was good to see firsthand that they experience the exact same growing pains, technical constraints, and people-related frustrations that every single other company I’ve worked with has gone through – perhaps just at a different scale.
If I could have my time again, I would absolutely be more proactive and willing to “be myself” sooner. A valuable lesson to take away.
To be fair there were some personal/family issues late last year which meant I was checked out of work for essentially a month, which didn’t help matters.
The unfortunate part is that, due to my slower onboarding process, I only just felt like I was starting to have real impact this year. Murphy’s Law, I guess.
Next steps
So, what’s next? I’m not sure…
I posted my news on LinkedIn just yesterday, and it honestly feels like a week or two ago, given the amazing response I’ve had and how busy it’s kept me.
My partner, Aimee, is in Australia this week and next for a wedding, so while I’ve been super busy fielding phone calls from recruiters, replying to friends in various Slack groups, and trying to update my CV and LinkedIn profile, I’ve also found myself (more than once) just sitting here reflecting on how awesome people have been, and all the nice things folks have said about working with me in the past.
More than a few lumps-in-throats have been experienced, I can tell you.
Having considered what’s made me happy in previous roles, I think my ideal job (in a nutshell) would be working with a smaller company that needs help scaling its technical systems and teams.
I haven’t been on the tools in a while, so a purely engineering role is off the table – and to be honest it feels a little too “single threaded” to me, i.e. it tends not to have the multiplicative, strategic impact I’d like to have.
So, something like an executive-level position in a start-up, through to a senior engineering manager kind of role in a larger, more established company. Something that will push me to learn and grow, while still allowing me to provide value early on.
I’m also always keen to help smaller start-ups who don’t need someone like me full-time, or perhaps can’t even afford such an engagement. I’ve done a few pro bono consulting gigs now (although I’ve been known to accept the odd beer as payment too), so reach out if you have a worthy cause which needs help getting started or scaling for growth, and we can jump on a few video calls at least.
Well, we are certainly living in interesting times. The world seems to have gone to hell in a handbasket over the last few weeks, doesn’t it? As we all struggle to find a “new normal” amidst the COVID-19 chaos, I thought I’d jot down a few rambling thoughts from my neck of the woods…
Coping with the shift to working from home (remote working)
We here at Timely are probably in a very fortunate position in that we already work remotely, and mostly from home at that. So when the various countries that we have staff in (NZ, AU, UK) went into lock-down, we didn’t really have to change too much about the way that we work.
For a lot of other people, this whole “working from home” thing is new and scary. I get that. I remember moving, almost 6 years ago, from never really having worked from home to working at Timely, where we were 100% work-from-home. It took me months to get settled and to find my “rhythm”. So the single biggest piece of advice I have here is not to sweat the small stuff, and don’t put too much pressure on yourself to get back to 100% effectiveness too quickly.
I won’t reiterate what a million other websites have already covered off (in terms of “top tips for working from home“, etc), but the few big ones for me are:
Setting aside a dedicated work space
Whether a spare room, a desk, a dining room table, whatever… make it your spot where you do work, and that’s all. Try not to use it for out of hours stuff like gaming, YouTube browsing, etc. Otherwise things tend to blur and you’ll either end up never “unplugging” from your work, or you’ll catch yourself watching 6 hours of cat videos on YouTube when you should be preparing those TPS reports.
Change your expectations around communication
When working in an office you get what I think of as real-time, unconscious information all the time. Want to know if a colleague is available for a chat? Just look up. Overhearing progress updates from a neighbouring team’s morning stand-up. Seeing that a team-mate is busy working on that script you need for a deploy tomorrow.
All of that goes out the window when working in isolation. You (and your team) need to get used to communicating more frequently and specifically. It won’t come naturally at first. You might feel like you’re “nagging” people, or interrupting them too often, but good communication will avoid frustrations down the line where you’re being held up because you’re not sure if someone else is working on what you need right away, or taking the dog for a walk instead.
We use Slack a lot, but don’t rely on written comms too much (whether Slack, email, etc) – get used to jumping into video calls, even just for unscheduled 2-5 minute chats throughout the day. Think of the situations where you’d normally just lean over your desk to ask a coworker what they think about using this framework versus that one, or whether they’ve heard of the newest changes to the SaaS product you’re rolling out. Just spin up a call – it’s much quicker than typing out paragraphs in Slack, and gives you an excuse to chat to someone, even if just a little bit.
Managing teams remotely
Leading a team in this environment is definitely more challenging than doing so in a co-located workspace. A lot of what I said above about communication being critical is even more applicable when leading a remote team. Don’t discount the value of continuing things that your team might be used to doing in the office, such as morning stand-ups, a social catch-up over lunch, or a beer at the end of the day. All of these can be done remotely over a Hangout or Zoom meeting. It may be awkward at first. No, it will be awkward, but embrace the awkwardness and push through it.
Over the past week or two my team have actually gone from just our normal 9am virtual stand-up each day, to having a second catch-up each afternoon as well. It’s not as structured as the morning stand-up, and is more just for chatting about how everyone’s days have gone, and usually devolves into poor attempts at joke telling, etc. All of this doesn’t have to take a huge amount of time. Some days when we’re “just not feeling it” they could be over in 5 minutes, and other days they’re 20 minutes of almost-crying-from-laughter – which is just what we all need a bit of at this time.
As a wider company we’ve also become more active in our social channels in Slack; these are channels related to DIY, gardening, beer brewing, the gym, Lego, and a million other niche topics. Friday afternoons also tend to be more active, with various groups spinning up video calls where people can grab a beer/wine/water and have a bit of a virtual-social catch-up to close off the week. These have always been super-awkward when we’ve tried them in the past, but strangely now that they’re all we’ve got, they’re actually great fun! The trick is to limit them to small-ish groups (<12 people) and for someone to be ready with a few conversation starters if those awkward silences start creeping in. Give it a go and comment with your results, or any other tips you discover along the way.
Remember that people are dealing with a lot right now, so on some days your team will be firing on all cylinders, and on other days people will be withdrawn, or overly sensitive, passive aggressive, etc. The best you can do is try to gauge this as quickly as possible and then tailor your interactions to suit.
Dealing with isolation and social pressure
All the above stuff talks about dealing with working from home, or managing a team during these trying times – but the most important piece of this puzzle is making sure you’re looking after yourself. This is something I definitely battle with.
Before having to isolate I enjoyed going to the gym every day. My partner is an IFBB bikini competitor who was due to compete just a few weeks ago (before the competition was cancelled due to COVID-related travel restrictions), and I’ve dabbled in powerlifting for years now. So naturally my social media is now filled with people going on about home workouts, staying motivated, videos of people doing all these awesome, clever workouts, or going for runs… you get the idea. At the same time I feel zero motivation to exercise, and am stuck inside eating too much food (and probably drinking more whisky than usual), which makes me feel even worse.
Similarly there’s a lot of pressure at the moment to use this “free time” you now have to better yourself. Every other person seems to be earning their MBA or getting a bloody masters in something from some online university… and here I am still trying to finish that 6 hour Pluralsight course I’ve been busy with for the past year. Never mind the guitar I’m trying to learn that I haven’t picked up in 2 months…
On top of this (as if it wasn’t enough) there’s stress and uncertainty about our jobs, the welfare of family and friends (I’ve got family in South Africa, which is definitely less equipped to deal with this than New Zealand is), making sure our kids are coping (and still learning while out of school), and a multitude of other things.
So I think the answer is to just not give a f__k… for now. There are probably more eloquent ways of phrasing it – but basically I think that, just like it takes time to get used to working from home, it’s going to take time to adjust to everything else that’s changed in our daily routines. So I’m not going to worry about putting on a few extra kgs, or my pile of unread books not shrinking as quickly as I’d like – in the same way that you shouldn’t worry about your work productivity taking a hit when you first start working from home.
Each day I’ll try to stick to a routine as much as possible. When it works – great! But when it doesn’t, there’s always tomorrow to try again.
I’ve been wanting to resurrect this blog for a while, so what better way to do so than advertising the fact that I need some help at work?
Here at Timely I head up the Platform team. We’re currently a team of 5 (including myself) covering areas such as internal support, reporting, data and databases, devops and security, and performance and tooling.
We’ve done a kick-ass job so far (if I do say so myself), but that backlog is starting to grow faster than we can knock out tasks. There are operational and DevOps related projects on the back-burner, and a bunch of security related enhancements we want to make too.
So we’re going to plug a few gaps in the team by finding someone to look after our core infrastructure (based almost exclusively on Microsoft Azure). This person will also kick it up a few notches by finishing our IaC implementation (Infrastructure-as-Code, using Terraform) and automating away as much toil as possible. There’ll also be projects like improving our DR capabilities, taking our operational monitoring platforms to the next level, and a bunch more!
In my previous blog post I talked about BIML, and how it might revolutionise my approach to creating ETL processes. It’s pretty cool, and very powerful, but there is a bit of a learning curve, so I decided to look for a different way to achieve the same thing – one that required less upskill time, and preferably less development time too.
So, the ideal solution will:
be quick to build initially, and easy to maintain in the long run.
allow for parallel data loads to make the best use of the available resources.
allow for ad-hoc changes to the load or schema without having to open, make changes to, and re-deploy the SSIS package.
I briefly tested several other methods (most of which involved generating large amounts of dynamic SQL and executing that against your source and/or destination). I instead decided to try out an SSIS add-on package called “Data Flow Task Plus”, which I’d never heard of before.
What is it?
A company called CozyRoc has developed a set of new components, and extensions to existing components within SSIS, making them a whole lot more powerful than what you get out of the box. This is nothing new; you can develop your own components relatively easily if you so choose (even I’ve dabbled with this many moons ago, trying to read CSV files with annoying formatting “features”).
Data Flow Plus lets you configure dynamic data flows. You can control various options via package or project parameters, which means less time spent opening packages to edit them when your source or destination schema changes. Basically this means you can create “schema-less” ETL packages which will just transfer data from a source table to a destination table, even if you add or remove (or change) columns! Too good to be true, right?
The Pudding
As they say, the proof is in the pudding, so here’s some pudding… figuratively speaking. Nothing like some green ticks in SSIS to make your afternoon!
That’s the end result of my proof-of-concept, but don’t worry, I’ll step you through it.
First things first, you’ll need to go to the CozyRoc website and download the package, either the 32- or 64-bit version depending on your requirements.
Once that’s done and you open Visual Studio, you’ll notice a bunch of new components in your SSIS Toolbox. The only one I’m covering here though is the new Data Flow Task Plus (highlighted), although I may cover more in future as there are a couple that sound interesting (like parallel foreach loops!).
New Plan
So my plan is to have table metadata stored in a table on the destination (Azure Data Warehouse) database, which is queried by the package and stored in package variables. I’ll then iterate over the list of tables, do my ETL (depending on what kind of load I’m doing), and finally load the data from the source system. Sounds simple enough (… and it is), so let’s get started.
And yeees I know this isn’t really much of an “ETL” process… but “ELT” doesn’t roll off the tongue as easily. :-p
Here’s a SQL script to set up for this proof-of-concept if you want to follow along. It creates 2 databases (a source and a destination), as well as a table to store metadata about the tables I want loaded from one to the other.
CREATE DATABASE DWSource;
GO
CREATE DATABASE DWDestination;
GO
USE DWDestination;
-- DROP TABLE LoadConfiguration
CREATE TABLE dbo.LoadConfiguration (
    LoadStream TINYINT NOT NULL,
    TableName NVARCHAR(100) NOT NULL,
    SqlCreateStmt NVARCHAR(MAX) NOT NULL,
    IndexColumnName NVARCHAR(100) NOT NULL,
    LoadType NVARCHAR(20) NOT NULL,
    ColumnListToLoad NVARCHAR(MAX) NOT NULL
    )

-- These are very simplified versions of a few tables in our (Timely's) database. You'll need to create them in the source database if you want to test this yourself.
INSERT LoadConfiguration VALUES (1, 'Booking', REPLACE('CREATE TABLE [dbo].[Booking](
    [BookingId] [int] NOT NULL,
    [CustomerId] [int] NOT NULL,
    [StartDate] [datetime] NOT NULL,
    [EndDate] [datetime] NOT NULL,
    [Price] [money] NULL,
    [BusinessId] [int] NOT NULL
)','NOT NULL','NULL'), 'BookingId', 'Full', 'BookingId, CustomerId, StartDate, EndDate, Price, BusinessId')

INSERT LoadConfiguration VALUES (1, 'Business', REPLACE('CREATE TABLE [dbo].[Business](
    [BusinessId] [int] NOT NULL,
    [Name] [nvarchar](100) NOT NULL,
    [DateCreated] [datetime] NOT NULL,
    [Description] [nvarchar](max) NULL
)','NOT NULL','NULL'), 'BusinessId', 'Full', 'BusinessId, Name, DateCreated')

INSERT LoadConfiguration VALUES (1, 'Customer', REPLACE('CREATE TABLE [dbo].[Customer](
    [CustomerId] [int] NOT NULL,
    [BusinessId] [int] NOT NULL,
    [FirstName] [nvarchar](50) NULL,
    [LastName] [nvarchar](50) NULL,
    [DateCreated] [datetime] NOT NULL
)','NOT NULL','NULL'), 'CustomerId', 'Full', 'CustomerId, BusinessId, FirstName, LastName, DateCreated')
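The script above only covers the destination side. If you want to follow along end-to-end, here's a rough sketch of the matching source tables (plus a couple of dummy rows) in DWSource – the definitions mirror the simplified versions stored in LoadConfiguration, and the sample data is entirely made up:

USE DWSource;
GO
-- Simplified source tables matching the CREATE statements stored in LoadConfiguration
CREATE TABLE dbo.Business (
    BusinessId INT NOT NULL,
    Name NVARCHAR(100) NOT NULL,
    DateCreated DATETIME NOT NULL,
    Description NVARCHAR(MAX) NULL
);
CREATE TABLE dbo.Customer (
    CustomerId INT NOT NULL,
    BusinessId INT NOT NULL,
    FirstName NVARCHAR(50) NULL,
    LastName NVARCHAR(50) NULL,
    DateCreated DATETIME NOT NULL
);
CREATE TABLE dbo.Booking (
    BookingId INT NOT NULL,
    CustomerId INT NOT NULL,
    StartDate DATETIME NOT NULL,
    EndDate DATETIME NOT NULL,
    Price MONEY NULL,
    BusinessId INT NOT NULL
);
-- A few dummy rows so the proof-of-concept loads actually move something
INSERT dbo.Business VALUES (1, 'Test Business', '2016-01-01', 'Just a test');
INSERT dbo.Customer VALUES (1, 1, 'Jane', 'Doe', '2016-02-01');
INSERT dbo.Booking  VALUES (1, 1, '2016-03-01 09:00', '2016-03-01 10:00', 45.00, 1);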
With this proof-of-concept I want to test that I can create tables, prepare them, and then load only the columns that I want loaded.
Variables & Expressions
A small but important part of creating a package like this is making sure you get your variable expressions right – i.e. make the various SQL statements and values you use as dynamic as possible. As an example here are my variables for this little package. Note the expression column and how values are stitched together when it comes to building SQL commands used by the various components.
From top-to-bottom, we’ve got:
ColumnListToLoad – this is the list of columns from the source table that I want loaded into the destination table.
IndexColumnName – the name of the “ID” column that I can use to tell where to load from if doing an incremental load. In the real world I’ll probably make the package handle either IDs or DateTime columns, because with some tables it will make more sense to load based on a load-date.
IndexColumnValue – if doing an incremental load, this variable will be populated with the maximum value of the index column already loaded into the data warehouse.
LoadSettings – the System.Object variable which will hold the full result set of the initial SQL query, and feed it into the ForEach loop container. Nom nom nom…
LoadType – whether we’re doing a Full or Incremental load. Could cater for other load types here too.
SQL_DeleteStatement – a SQL delete statement based on an expression. If doing an incremental load then this will delete any data that may exist after the current max IndexColumnValue, which should help prevent duplicates.
SQL_DropStatement – a SQL table drop statement. Probably didn’t need to be a fully dynamic expression, but for some reeeeaally important or large tables, you may want to guard against accidental drops by putting something harmless in this variable for those specific tables.
SQL_LoadStatement – a SQL select statement which will pull the data from the source table. This select statement makes use of the ColumnListToLoad variable, as well as the SQL_WhereClause variable if performing an incremental load (there’s an example of the generated statements just after this list).
SQL_MaxIdValueStatement – SQL statement to get the max Id value and populate the IndexColumnValue variable.
SQL_WhereClause – a snippet of SQL which depends on whether we’re performing an incremental load, and on the value of the IndexColumnValue variable.
SqlCreateStatement – The SQL create table statement for the destination table. In this example it’s just an exact copy of the source table. I tend to pull production data across into tables matching the source schema, even if my “ColumnListToLoad” variable means that I’m only loading a subset of columns. This means that if I need to add columns to the load later, I don’t need to change the create scripts.
TableName – the name of the source (and in this case, destination) table.
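To make those expressions a bit more concrete, here's roughly what the generated statements might evaluate to for the Booking table during an incremental load. This is illustrative only – the actual text is stitched together by the expressions above, and the value 1000 is just a stand-in for whatever IndexColumnValue holds at runtime:

-- SQL_MaxIdValueStatement: run against the destination to find the current high-water mark
SELECT MAX(BookingId) AS MaxId FROM dbo.Booking;

-- SQL_DeleteStatement: run against the destination to clear anything beyond that mark (helps prevent duplicates from partial loads)
DELETE FROM dbo.Booking WHERE BookingId > 1000;

-- SQL_LoadStatement: run against the source, built from ColumnListToLoad plus SQL_WhereClause
SELECT BookingId, CustomerId, StartDate, EndDate, Price, BusinessId
FROM dbo.Booking
WHERE BookingId > 1000;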
The Package
Here are the steps in my package (and a chance for you to admire my l33t Windows Snipping tool handwriting skillz!). Note that I’m not going to go into a whole lot of detail here, because the purpose of this post isn’t to cover all things SSIS. Instead I’ll link to other sites which explain each step or series of steps more clearly.
1. Use an Execute SQL task to query the [LoadConfiguration] table and store the full result set in the [LoadSettings] object variable (see the example query just after these steps).
2. Use a ForEach container to loop through each ‘row’ in the above object variable, assigning the individual values to variables scoped to the container.
3. There are separate sequence containers for Full and Incremental loads. Their disabled states are set via an Expression which is based on the value from the [LoadType] column grabbed from the [LoadConfiguration] table above. So, if we’re doing a full load, the Incremental load container will be disabled, and vice versa. Another (possibly better) way of doing this would be to use precedence constraints with expressions to control the path of execution.
4. As above, but for the ‘Incremental’ [LoadType] value…
5. Load data using the new Data Flow Task Plus component. The best way to figure out how to do this is to watch the (rather dry) video from CozyRoc on this page. But basically it involves setting up the component just like you would the normal data flow task, but then removing all columns from the outputs and inputs (using the advanced editor), and leaving only a single “placeholder/dummy” column. This placeholder column is brilliantly named “THUNK_COLUMN”.
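For completeness, the query behind step 1 doesn't need to be anything fancy – something along these lines would do (the LoadStream filter is my own assumption, in case you want to split tables across parallel streams later):

SELECT TableName, SqlCreateStmt, IndexColumnName, LoadType, ColumnListToLoad
FROM dbo.LoadConfiguration
WHERE LoadStream = 1  -- hypothetical: parameterise this if you run multiple streams in parallel
ORDER BY TableName;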
So how does it perform with real data volumes? Dunno… I haven’t finished implementing the real thing yet. But the proof of concept is working well, and it went together pretty quickly, so I’m positive this will work, I think…
I’ll update this post with my thoughts once I’ve got it all working. As usual please let me know if I’ve made any glaring mistakes, or if you’ve got some awesome ideas on how to improve this process further.
I’ve used the BIDS Helper Visual Studio add-on for years now, and I’ve seen and heard of BIML, but it’s one of those things I’ve never needed to look into any further than that. Until I discovered that it’s something that would’ve saved me hours of tedious SSIS work!
What is it?
BIML (Business Intelligence Mark-up Language), or more specifically, BIMLScript, is sort of a mashup of XML and C# code nuggets, allowing you to create SSIS and SSAS packages. This is very much the condensed “DBDave” version – check out the official site for a much more eloquent explanation of what it is.
Basic Example
When you open up your SSIS project in Visual Studio, if you’ve got BIDS Helper installed, then when you right-click on the project you have the option of adding a BIML file:
It’ll create a new file under “Miscellaneous” in your project. Go ahead and open it up and you’ll see something like this:
You can “execute” a BIMLScript by right-clicking on it, and selecting “Generate SSIS Packages”:
Now we can jump in the deep end and paste the following into this new BIML script:
Yeah, okay, let’s step through this to figure out what it does. I’ll show you what each bit of code results in too, which might help make it more tangible/understandable:
First we set up the connections that will exist within the package. These are just connections to tempdb on my local SQL instance for testing. This bit results in this:
Next up, we specify the project and some project parameters that we’re going to use within the package:
There are some gotchas regarding project parameters in BIML when using BIDS Helper to check and run your BIMLScript, so keep that in mind. As per this example, you need to specify the project parameter definitions in here, even if they already exist within your project.
So because of these issues, I found it simpler just to make sure the parameters already exist, like this:
Now we create the package itself, and substitute in some of the package parameters, which in this case weāre using to replace parts of the connection strings for our source and destination connections.
We’ve got an “Execute SQL” component running a truncate of the destination table first. However, we only want this to run if we’ve set our project parameter “DoTruncate” to true.
And lastly a Data Flow task to move data. This is done using a SQL query with a parameter for a “KeyDate” column, as an illustration of what you might do in a real-life situation.
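To give you an idea of the kind of source query I mean (the table and column names here are made up purely for illustration), it's essentially just a parameterised select that the data flow's source would run:

-- Hypothetical source query for the data flow; the ? is mapped to the KeyDate package parameter
SELECT SomeId, SomeValue, KeyDate
FROM dbo.SomeSourceTable
WHERE KeyDate >= ?;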
Cool! Now what??
So that’s BIML in a very small nutshell. Even if that’s all you’re doing with it (i.e. creating pretty basic packages) I think it’s worth doing since it makes source control of your packages SOOOOOO much nicer!
Imagine getting a pull request from a developer who’s made some SSIS changes, and simply being able to diff the BIML scripts to see exactly what they’ve changed!?
But wait, there’s more…
In the scenario that led me to discover BIML, I wanted to create a “dynamic” SSIS package that was driven by metadata stored in a database. In other words, I could maintain a table with a list of table names that I wanted “ETL’d” from my production system to my data warehouse, and my magic SSIS package would pick up changes, new tables added, etc without me needing to open and edit one monstrous package.
This is where the power of BIMLScript and its C# nuggets really shines. It lets you drop in complicated logic in C# code to control and mould the output of the BIML. So you could look up a list of tables to load, then iterate over that list, creating packages per table. Check out this post for a lot more detail (and examples) on how to achieve this.
That’s it for now. There are lots of more detailed examples around if you look for them (Google is your friend); I just wanted to highlight the possibilities which I didn’t realise were there before. Hopefully you find it as useful as I did.
I’ve recently needed to move data from our transactional database (an Azure SQL database) into an Azure SQL Data Warehouse. A definite case of “harder than it needed to be”…
What’s an Azure Data Warehouse?
I’ll assume if you’ve read this far, you know what a SQL database is. But an Azure Data Warehouse is a slightly different beast; it’s another hosted/managed service offered on Microsoft’s Azure cloud infrastructure. They market it as a distributed & elastic database that can support petabytes of data, while offering enterprise-class features.
That essentially means that behind the scenes this “database” is actually a bunch of orchestrated nodes working together (a control node, multiple compute nodes, storage, etc). Queries against this distributed database are themselves split up and run in parallel across these nodes – i.e. “MPP”, or Massively Parallel Processing. That’s very much it in a nutshell – for a lot more detail though, read this as well.
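If you're wondering what that looks like in practice, the distribution strategy is something you declare up front when creating a table. As a rough example (table and column names are just made up here), a hash-distributed table with a clustered columnstore index looks like this:

CREATE TABLE dbo.FactBooking
(
    BookingId  INT      NOT NULL,
    CustomerId INT      NOT NULL,
    StartDate  DATETIME NOT NULL,
    Price      MONEY    NULL
)
WITH
(
    DISTRIBUTION = HASH(CustomerId),  -- rows are spread across the compute nodes by CustomerId
    CLUSTERED COLUMNSTORE INDEX       -- columnstore storage, well suited to large fact tables
);

Worth choosing the distribution column carefully – queries that join or aggregate on it avoid shuffling data between nodes.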
Why use this over other solutions?
I originally set up an old-school SSAS instance on an Azure VM, backed by a normal SQL Server data warehouse hosted on the same VM. Not very exciting, but it worked. The struggle was that getting data from our production database (an Azure SQL Database) into this VM required either SSIS packages pulling data across the wire, or a restore of the prod database locally (i.e. onto the VM) and then extracting the data from that using cross-database queries.
Then I read up on these relatively new Azure Data Warehouses, and I assumed that *surely* there would be a much simpler/better way of moving data directly from one to the other natively, within the “cloud”.
“Cloud-to-cloud” ETL FTW!
I asked the question, and the consensus seemed to be that Data Factory is the cool new way to move your data around the cloud. So I gave that a crack. Be warned, you’ll need to brush up on JSON (since you’ll need to be comfy writing/modifying JSON to set up the data sources, control the pipelines, etc).
All the examples I found seem to involve Blob-to-SQL, or SQL-to-Blob data loads. So I figured out how the bits and pieces work together, how to customise the JSON to set up the correct sources, pipelines, etc, and then kicked it off. It didn’t work… <sadface>
The issues I ran into were definitely solvable (data type conversion issues mostly) – but given my noob-ness with JSON and Data Factory in general, as well as the fact that it felt really clunky when trying to change schema quickly, I decided to be boring and revert back to good ol’ SSIS instead.
I feel like there’s a huge gap here for someone to build a simpler data load tool for this! And yes, I did also try using the Data Factory “Copy Wizard” (still in preview at this stage). While it did allow me to set up a basic table copy, I then wanted to modify the JSON pipeline slightly due to some data type issues, and amusingly the Azure Portal threw an error when I saved my changes because the default quota limits pipeline JSON objects to 200KB, and mine was *just* over that. You can request to have this increased, but I couldn’t be bothered and basically ragequit at this point.
You see, the problem is that when you’re the sole infrastructure & database guy for a smallish start-up company, you don’t have time to spend a few days learning the ins and outs just to set up a basic data transfer. I need something that just works, quickly, so I can move on to solving tickets, optimising database performance, flying, checking on the test/dev environments, etc, etc, etc…
I’ll keep an eye on the copy wizard though, as I’m sure they’ll improve it over time, and it seems to be the closest to what I’m looking for at this stage.
It’s not all bad
Having said all of that, I’m still sticking with SQL Data Warehouse as my BI/BA back-end, and have been impressed with the performance of loads (even just done via SSIS packages) as well as query performance.
I made sure to split the data load aspects of my package up so as to utilise the parallel nature of SQL Data Warehouse, so I’m guessing that will be helping performance. I’ve also built some proof-of-concept Power BI dashboards over the top of the data warehouse, which was ridiculously easy (and quite satisfying).
Let me know if you’ve had any similar experiences (good or bad) with loading data into SQL Data Warehouse, or moving data around within the cloud.