Day 1 – The precon
‘Data Warehouse modelling – making the right choices’ by Davide Mauri (site|blog|twitter) and Thomas Kejser (blog|twitter) was a very good walkthrough of the aspects of building a data warehouse, seen from an architectural angle.
Sadly, it was announced that this is Thomas Kejser's last round of SQL Server talks, so you need to be quick to catch a glimpse of his knowledge. With new tools and scripting they showed how to cut down the ‘monkey work’ in every project, freeing up time for business analysis and talking with the end users. They also showed a new agile approach to dimensional modelling – still on the SQL Server, not in Analysis Services – that gives decision-makers the possibility to change their minds about slowly changing dimensions and history attributes.
They got me hooked on BIML scripting (reference link) to build SSIS packages VERY FAST with a metadata-driven approach – “Build 100 SSIS packages in 3 sec”.
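The core of the metadata-driven idea can be sketched in a few lines: one loop over table metadata emits one package definition per table, which is why generating 100 packages takes seconds. The sketch below is Python with a deliberately simplified XML shape – the table names are invented and this is not the real Biml schema, just an illustration of the pattern.

```python
# Metadata-driven generation in miniature: one loop over table metadata
# produces one package definition per table. The XML shape is a
# simplified stand-in for Biml, and the table names are hypothetical.
tables = ["Customer", "Product", "Sales"]

packages = []
for table in tables:
    packages.append(
        f'<Package Name="Load_{table}">'
        f'<Source Table="{table}" /><Destination Table="stg.{table}" />'
        f"</Package>"
    )

print(len(packages), "package definitions generated")
```

Adding a new source table then means adding one row of metadata, not hand-building another SSIS package.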
Day 2 – keynote and sessions
The keynote was presented by the founders of PASS SQLRally. Main speaker of the keynote was Jim Karkanias, speaking about the new buzzword in the communities – ‘big data’ – and demystifying its background and layers. A good approach to what big data is and what it can be used for. Next up at the keynote was Judy Meyer, speaking about the Excel features around big data – Microsoft's base app for playing with data and the different data sources around the world.
Starting with Power Query, we got a really good understanding of the features in Microsoft's Power-x pack. The sessions got off to a good start with Kevin Kline (blog|twitter) and his ‘SQL Server internals and Architecture’.
His analogy: just as a pit crew in a racing team knows everything about a combustion engine, people in a team working with SQL Server should know how the engine works. He covered the ACID properties of transactions: Atomic – all or nothing; Consistent – every transaction leaves the database in a valid state; Isolated – concurrent transactions do not interfere with each other; Durable – committed changes survive failures. All the ACID properties add overhead and CAN slow down transactions. A good walkthrough of the different engine components involved in a read action and a write action.
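Atomicity – the "all or nothing" property – is easy to see in a small experiment. The sketch below uses SQLite from Python rather than SQL Server, and the account names and amounts are invented, but the behaviour is the same: if anything fails mid-transaction, the whole transaction rolls back.

```python
import sqlite3

# Minimal atomicity demo (SQLite stand-in for SQL Server; table and
# values are invented for illustration).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")
conn.commit()

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 70 WHERE name = 'alice'")
        raise RuntimeError("simulated crash mid-transfer")
        conn.execute("UPDATE accounts SET balance = balance + 70 WHERE name = 'bob'")
except RuntimeError:
    pass

# Atomic: the partial debit was rolled back, so both balances are unchanged.
balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'alice': 100, 'bob': 50}
```

The rollback machinery (logging the old values so they can be restored) is exactly the kind of overhead the session pointed out.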
Then window functions, pushed to maximum performance gain. A very good view of the evolution of queries from SQL Server 2000 to SQL Server 2012, digging through the key elements of a window function – how to use them and how to gain high performance through them. After a good lunch we headed on with Brent Ozar's (blog|twitter) ‘How the SQL Server engine thinks’. A different approach to the traditional slideshow – Brent used the audience as a SQL Server. We all had sheets of paper with data, and Brent, being the end user, asked us (the SQL Server) for data. A fun way to do it, and it actually worked – we all learned new things, even though it was a level 100 session. A good session right after lunch, when we are all naturally a little slowed down by digestion.
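The classic example of that evolution is a running total: before window functions you needed a correlated subquery per row; with `SUM(...) OVER (...)` it is one pass over the data. A minimal sketch, run against SQLite (which also supports window functions, from version 3.25) with invented sales rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, month INTEGER, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
    ("north", 1, 10), ("north", 2, 20), ("north", 3, 5),
    ("south", 1, 7), ("south", 2, 3),
])

# Running total per region in a single scan, instead of the pre-2012
# pattern of one correlated subquery per output row.
rows = conn.execute("""
    SELECT region, month, amount,
           SUM(amount) OVER (PARTITION BY region ORDER BY month) AS running_total
    FROM sales
    ORDER BY region, month
""").fetchall()

for row in rows:
    print(row)
```

`PARTITION BY` restarts the total per region and `ORDER BY` inside the window defines the running frame – the two key elements the session dug into.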
To twist our brains, and for those who were ready to really listen, came the session ‘Using your brain to beat SQL Server’ by Adam Machanic (blog|twitter) and Thomas Kejser. Mathematics at a very high level combined with deep SQL Server internals knowledge makes these two awesome at their work.
In the evening the event had arranged a good dinner and entertainment – two very skilled magicians who did a great job with illusions and magic tricks.
Day 3 – more sessions
The day kicked off with ‘High Availability of SQL Server’ by Tobiaz Koprowski (twitter).
It's important to have your data when you need it – every time you need it. High availability of the SQL Server can therefore be important to implement. Tobiaz covered the subject in a good and practical manner; his knowledge and know-how shine through in his presentation, and I came away enlightened.
Davide Mauri had another session, ‘Automate DWH patterns’ – a deeper dive into the pre-con subject of BIML scripting and metadata-driven DWH development.
I'm hooked – BIML is definitely the next focus for me.
The last session for this event for me was ‘Analytical hierarchies in Cubes’ – a very good one indeed.
Instead of having a lot of measures based on time (e.g. YTD, last month, last week etc.), it is possible to make a dynamic calculation hierarchy based on the desired calculation. After implementation, the user can choose which measure and calculation to use from a hierarchy instead of pulling all the possible measures into the pivot table. The user experience is also a lot better, as the list of measures is a lot smaller. I'll have to look into that as well.
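The pattern can be sketched outside the cube: instead of defining sales_ytd, cost_ytd, sales_prior_month, ... as separate measures, one set of calculations is applied to any base measure. The sketch below is plain Python with invented fact rows and function names – it illustrates the idea, not the MDX implementation shown in the session.

```python
from datetime import date

# Hypothetical fact rows: day -> base measure values. All names and
# numbers are invented for illustration.
facts = {
    date(2014, 1, 10): {"sales": 100, "cost": 60},
    date(2014, 2, 5):  {"sales": 80,  "cost": 50},
    date(2014, 3, 1):  {"sales": 120, "cost": 70},
}

def current(measure, as_of):
    """Total for the month of as_of."""
    return sum(v[measure] for d, v in facts.items()
               if (d.year, d.month) == (as_of.year, as_of.month))

def ytd(measure, as_of):
    """Year-to-date total up to as_of."""
    return sum(v[measure] for d, v in facts.items()
               if d.year == as_of.year and d <= as_of)

def prior_month(measure, as_of):
    """Total for the month before as_of."""
    prev = (date(as_of.year - 1, 12, 1) if as_of.month == 1
            else date(as_of.year, as_of.month - 1, 1))
    return current(measure, prev)

# One "calculation hierarchy" reused by every base measure, instead of
# a separate pre-built measure for every measure x calculation combo.
calculations = {"Current": current, "YTD": ytd, "Prior month": prior_month}

as_of = date(2014, 3, 31)
for name, calc in calculations.items():
    print(name, calc("sales", as_of))
```

The user picks one measure and one calculation, so the measure list stays short even as calculations are added.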
A very inspirational event, even though some of the sessions could have been pitched at a higher level from my perspective.
I didn't see much (read: none) of Stockholm, as the event was held inside Arlanda Airport – but again, these events aren't about culture; they're about learning, bringing back new knowledge and sharing it with colleagues.
Looking forward to the next event – wherever that will be.