
Embrace changing requirements

I’ve worked for a few different companies over the years, some small and some big, and one thing that really sets the tone for a lot of inter-team interactions is how the company deals with changing requirements.

Larger companies, and some small or medium companies that feel they need to start the transition to being a larger company, have a tendency to lock in requirements at the start of a piece of work. The developers and the BAs want a document that lays down exactly what will be built before any work starts, and processes in place for handling changes once work has begun. This is often adopted as a way to protect developers: there will have been incidents in the past where pieces of work ballooned in size, which meant the work took longer than planned, which in turn led to negative consequences for the developer. The solution someone arrived at was a rigid requirements process, and now everything runs more smoothly.

Except things probably aren’t actually running smoothly under the surface. What’s most likely happened is that you’ve traded the issue of ever-changing requirements for a different problem: nobody’s getting what they want any more.

There are two primary reasons for this: firstly, people rarely know exactly what they want; and secondly, when they do know, they suck at articulating it properly. Because of this, when you have a rigid system that asks for all requirements to be defined up front you get an imprecisely worded attempt at describing something that was only ever half an idea in someone’s head anyway.

My experience has been that this goes double when talking about reports. Inevitably, if someone specs a report and you produce the report to their spec, what you’ll hear is something like “well, that’s good, but looking at it what I really want to know is…”

So, what do you do? You can’t go back to the old way, because the business needs some ability to plan how long things will take, but the new way means the dev teams aren’t delivering value a lot of the time. Often companies don’t address this directly, and you see a lot of bitterness appear between the dev team and the rest of the business as a result. The business resents having to specify everything to the nth degree, and still not getting something they want at the end, and the dev team resent doing exactly what they were told to and getting criticism for it.

The answer, in my opinion, is to embrace the fact that requirements will change and work with the business to deliver valuable work. You probably read that sentence thinking it sounds nice, but doesn’t offer any practical advice, and you’re right. That is the mission statement, not a roadmap of how to get there, but there are some more practical steps you can take:

Define a goal for the piece of work, rather than an exact specification. So, maybe instead of defining every item in a report, you have a goal something like “produce a financial report to provide monthly key metrics on our clients”. Maybe you would have a bit more detail in there, but it wouldn’t define every item in the report. Agile proponents will recognise this as being similar to the user story one-line descriptions that usually begin with “As a user I want to be able to …”

The flip side of this is to continue rejecting requirements changes that completely transform the work, or are effectively new pieces of work entirely. So, if part way through developing the monthly client financials report you get a request that it now needs to hook into the internal time tracker system, the HR system, and the billing system, to provide extra details, that’s a fundamental change to what you’re doing, so there needs to be a new estimation and a change to the plan. Similarly, if you get a request to add a second report on client loyalty, that’s a new requirement and needs estimating and scheduling separately.

Code with extensibility in mind. It should be easy to change details of your code to adjust to changing requirements. If you make your code overly complex, or put everything in one big procedure, or have steps that are tightly coupled together, it will be harder to adjust when the requirements shift.

Keep defining your understanding of the requirements as you go, and checking with the business stakeholder.

Keep producing prototypes and sharing them with the business stakeholder.

These two go hand in hand with the next one: make sure the business stakeholder is available throughout the process. If they are, and if you are communicating your vision and progress with them, and if you are getting timely feedback from them, then you have a fighting chance of producing something they will be happy with at the end.

None of this is easy, and if your company is wedded to a fully waterfall approach it can be flat out impossible to achieve, but if you can make it work there are a lot of benefits. In my current job, I’ve spent some time moving one of the projects I’m on to this more collaborative approach, and people are starting to notice that it’s working. I’ve been able to make this happen as just a developer, with no authority over requirements or project management at all, just by regularly checking in with the business sponsors and being open to requirement changes. It’s helped that the business sponsors have made themselves available for this; not all business sponsors will, and at that point you can either give up or start raising things with managers, aiming for a more formal change in how your company works. That may be the next step for me, and, if I do go down that route, having this existing project as evidence will be a key part of my argument.

Some people may be looking at this and thinking it is only relevant to waterfall companies, or companies that have adopted agile-light practices, but I have seen companies adopt every bit of scrum and still fall into these traps.

In conclusion: requirements will change. It’s a fact of development that pretty much nobody knows exactly what they want at the start of a piece of work, and you can either ignore that or embrace it. If you embrace this fact, and work with it, you increase the chance that you will produce software that adds business value, and ultimately that is what we should be aiming to do.

When to allow NULLs

Some years ago I did a post on NULLs where I detailed some of the issues you can get with them. More recently I did another post on NULLs that detailed an example of an interesting issue some NULLs had caused in work when looking at an IN statement.

Now I want to talk a bit about when we should be using NULLs and when we shouldn’t, inspired again by some things I’ve seen in legacy code a few days ago. And when I say using, I mean when should we allow values to be NULL, in tables, and parameters, and anything else.

The first thing to remember is what a NULL represents. It’s very simple: NULL means “I don’t know”. If you keep that in mind, and you’re practising good development design where everything has a point, you should be able to tell whether it’s possible to be unsure about any particular value for a row.

A quick example:

CREATE TABLE dbo.Customer
    (
          CustomerID INT IDENTITY
        , Forename VARCHAR(100)
        , Surname VARCHAR(100)
        , MiddleNames VARCHAR(300)
        , IsEmailable BIT
        , MobileNumber CHAR(11)
        , HomePhoneNumber CHAR(11)
        , DateOfBirth DATE
    )

Now, I’ve not included any NULL/NOT NULL in this example, because I want to talk through a few of these columns before I give my opinions.

  • CustomerID – This is the primary key of the table. It uniquely identifies every row. We definitely want this to not be NULL, and we are using an IDENTITY value (another option here is to use a SEQUENCE object) to make sure we always have a value.
  • MiddleNames – Not everyone has a middle name. We could set this up to not allow NULLs and force a blank string instead (perhaps via a default in the column definition), or we could allow NULLs. Either approach seems legitimate here.
  • MobileNumber/HomePhoneNumber – Again, not everyone has a mobile, or a home phone number. NULL seems like a legitimate choice here, especially as we don’t have the empty string option because any value we use has to be 11 characters. We could default to 11 spaces, but then whatever reads this table has to code around that in pretty much exactly the same way it would with NULLs.
  • Forename/Surname – Most people have both a forename and surname (sidenote, I hate when I see FirstName and Surname, it’s either Forename/Surname or FirstName/LastName). But not everyone does: there are some cultures where it’s less normal, and even within western cultures you have individuals like Cher or Sting. What if they end up in your database for some reason? Even if you are assuming everyone will have a forename and surname, your data may not be the best quality. In these cases you need to make a judgement call: you can either reject data without a forename and surname, or allow NULLs or some default value in your database.
  • DateOfBirth – Everyone definitely has one of these, but your source data may not always capture it. In this case in particular, a default value can do odd things, and you are safer with allowing NULLs or rejecting any data without a date of birth.
  • IsEmailable – This is the column from real life that triggered this post. In our table it was set to allow NULLs, but I think that is a fundamental mistake. This column is used to decide if we can send an email to someone or not, and regardless of the quality of our source data, we have to know whether we will include this person in our email campaign. We may get customers who come through without any kind of email permissions in their data set, but when we run a campaign we either include that customer or not (usually not), so that decision should be written into the insert into this table, and the column should not allow NULLs.

What it should look like:

CREATE TABLE dbo.Customer
    (
          CustomerID INT IDENTITY NOT NULL -- primary key
        , Forename VARCHAR(100) NOT NULL -- everyone has at least 1 name
        , Surname VARCHAR(100) NULL -- not everyone has multiple names
        , MiddleNames VARCHAR(300) NULL -- not everyone has a middle name
        , IsEmailable BIT NOT NULL -- we always know if we will send an email or not
        , MobileNumber CHAR(11) NULL -- not everyone has a mobile number
        , HomePhoneNumber CHAR(11) NULL -- not everyone has a home phone number
        , DateOfBirth DATE NULL -- we don't always get this in the data and it isn't vital to this solution
    )

NULL/Default

In a couple of the columns I said you could use a default or a NULL. I prefer NULLs when you genuinely aren’t sure, because defaults can do odd things to your queries. For example, say you have a default in the DateOfBirth column of '01-Jan-1900' because you’re confident that’s not going to be a real value. But, for whatever reason, you need to find anyone who’s over 18, so you include a where clause of DATEDIFF(yy, DateOfBirth, SYSDATETIME()) >= 18, and you bring back everyone whose date of birth you don’t know, and potentially you send some unsuitable mail to a minor and get sued. Leaving DateOfBirth as NULL keeps those rows out of any queries that filter on DateOfBirth, and means you have to make a conscious decision to include them.
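
To make the pitfall concrete, here is a small sketch (using a hypothetical table variable rather than the dbo.Customer table above) where the defaulted date slips straight through an age filter:

```sql
-- Hypothetical table with the risky default discussed above
DECLARE @Customer TABLE
    (
          CustomerID INT NOT NULL
        , DateOfBirth DATE NOT NULL DEFAULT '19000101'
    );

INSERT INTO @Customer (CustomerID) VALUES (1);                          -- DOB unknown, default applies
INSERT INTO @Customer (CustomerID, DateOfBirth) VALUES (2, '20150601'); -- a minor

-- The unknown-DOB row passes the over-18 filter, because 1900 looks very old
SELECT CustomerID
FROM @Customer
WHERE DATEDIFF(yy, DateOfBirth, SYSDATETIME()) >= 18; -- returns only CustomerID 1
```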

Conclusion

NULLs are part of the T-SQL language for a reason, and they help capture cases where you might genuinely not know the value of something. Setting every column to allow NULLs causes issues down the line, as every bit of code then has to handle that possibility, but not allowing NULLs when they could legitimately occur creates other issues. You should consider whether to allow NULLs on a column by column basis, both because future developers will have to account for your decisions in their code, and because the decision you make sends a message to those future developers about the shape of the data you are expecting to see input into the system.

Service Broker 101 Lesson 6: XML and messages

So far we have looked at assembling the Service Broker components, sending messages, and handling them through stored procedures. But in all of the examples, we have only ever treated the message body as a single value that we do something with, like add to a table. This lesson will go through some of the fundamentals of querying XML values to extract information. If you’re already familiar with XML in SQL Server there probably won’t be anything new in here for you.

What is XML?

XML stands for eXtensible Markup Language. It’s a way of encoding information to send between services in a semi-structured way. XML works using the concept of elements; each element can contain other elements. An element can also contain a value, or it can be an empty element, and an element can have one or more attributes attached to it. Elements are tagged using the <> notation, so an element called SomeElement would appear in XML as <SomeElement>value</SomeElement>. If the element is empty, that can either be represented as <SomeElement></SomeElement> or <SomeElement />. Attributes are included in the opening tag for the element, like so: <SomeElement Attribute="AttributeValue">element value</SomeElement>. A more complex example is:

<people>
  <person personID="123">
    <firstName>
      Larry
    </firstName>
    <lastName>
      Smith
    </lastName>
  </person>
  <person personID="124">
    <firstName>
      Joanne
    </firstName>
    <lastName>
      White
    </lastName>
    <middleName>
      Rachel
    </middleName>
    <pet>
      <name>
        fluffles
      </name>
      <animalType>
        Dog
      </animalType>
    </pet>
    <pet>
      <name>
        wuffles
      </name>
    </pet>
  </person>
</people>

So, what we have here is an element called people. This contains 2 person elements with personID attributes attached to them. These both contain firstName and lastName elements, but personID 124 also has a middleName and two pet elements. These pet elements each hold a name element, but only fluffles has an animalType element.

In this way, we can use XML to hold only the data we actually know. Unless we have specified an XML Schema (outside the scope of this class) there is no validation on what elements or attributes an element should contain.

Retrieving values

Often when we send service broker messages, the message body will contain XML. In these cases, the activation stored procedure often needs to extract the values from the constituent elements in order to act appropriately. SQL Server provides 5 methods you can call from an XML object to extract specific information, as part of the XQuery functionality. We will start with the value method:

DECLARE @XML XML = '<people><person id="5"><firstName>Larry</firstName><lastName>Smith</lastName></person></people>'


SELECT @XML.value('(/people/person/firstName)[1]', 'VARCHAR(100)');
SELECT @XML.value('(/people/person[@id="5"][1]/firstName)[1]', 'VARCHAR(100)');
SELECT @XML.value('/people[1]/person[1]/firstName[1]', 'VARCHAR(100)');

This returns the value “Larry”. There are a few things to note here:

  1. The value method is called as if the XML object is a class and the value is a method in an object oriented language.
  2. The path to the element you want to reach has to travel down from the outermost element, with each element starting with a “/”
  3. You must specify the ordinal number of the element you want to use. This is done by wrapping the path in brackets and specifying the ordinal at the end of the brackets, e.g. the first query simply says return the value of the very first firstName, in the people>person>firstName hierarchy.
  4. You can optionally specify ordinals for any of the elements along the path, but can only do away with the round brackets and final ordinal if you specify other ordinals for every level along the way as in the third query. You can also filter at each element level based on attributes using [@attribute=”AttributeValue”] and can combine this with an element ordinal.
  5. You need to specify the data type you want to convert the value to as the second parameter, in this case I have specified VARCHAR(100).
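
The same value method can also pull out an attribute’s value directly, using the @ prefix in the path (a quick sketch against the same shape of XML as above):

```sql
DECLARE @XML XML = '<people><person id="5"><firstName>Larry</firstName></person></people>';

-- Attributes are addressed with @, and the path still needs an ordinal
SELECT @XML.value('(/people/person/@id)[1]', 'INT'); -- returns 5
```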

This is fine if you want to essentially send one row of information, so in a situation where each message would contain one person’s information. It is even a possible approach if you know you will always send a fixed number of rows, so if each person would always have three pets attached to them. But what if you want to allow your people to have a variable number of pets (a far more plausible scenario)? That requires you to use the nodes method.

Nodes

The nodes method is used to shred XML. Shredding is a term that basically means turning the XML into more of a table structure. A call to the nodes method includes a specified path through the XML, and returns a table containing one row per element at the level specified, with each row containing the whole XML. From there you can use the value method to navigate up and down from the specified element, and return the value(s) you want. For example:

DECLARE @XML XML = '<people><person id="4"><firstName>John</firstName><lastName>Smith</lastName></person><person id="5"><firstName>Larry</firstName><lastName>Smith</lastName></person></people>'


SELECT Person.Person.value('(./firstName)[1]' , 'VARCHAR(100)')
FROM @XML.nodes('/people/person') AS Person(Person);

SELECT Person.FirstName.value('.' , 'VARCHAR(100)')
FROM @XML.nodes('/people/person/firstName') AS Person(FirstName);

Note that in the first SELECT we still need to specify the ordinal position, this is because there may still be multiple firstName elements contained in the person element. In the second example, because we are already at the firstName element, we know that there is only one firstName at our level and do not need the ordinal.

If you ever want to travel up the XML path after using nodes, you can use the ‘.’ notation: ‘.’ means stay at this level, ‘..’ takes you up one level, and ‘../..’ takes you up two, and so on.
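
As a small sketch of that navigation, this shreds down to the firstName level and then uses ‘..’ to reach across to the sibling lastName:

```sql
DECLARE @XML XML = '<people><person id="4"><firstName>John</firstName><lastName>Smith</lastName></person></people>';

SELECT
      Person.FirstName.value('.', 'VARCHAR(100)')                AS FirstName
    , Person.FirstName.value('(../lastName)[1]', 'VARCHAR(100)') AS LastName -- up a level, then back down
FROM @XML.nodes('/people/person/firstName') AS Person(FirstName);
```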

This can be a bit tricky to get your head around, so if you are struggling it may be best to do some more reading or experimenting on your own. The basic idea is that the nodes function will return one row for each element at the level you specify. Each row will contain the entire XML, but with a pointer to that row’s element, and any further methods you call against the data will start their paths at that pointer. I’ve included one more example to try and make this make sense:

<people>
	<person id="4">
		<firstName>
			John
		</firstName>
		<lastName>
			Smith
		</lastName>
	</person>
	<person id="5">
		<firstName>
			Larry
		</firstName>
		<lastName>
			Smith
		</lastName>
	</person>
</people>
--Assuming the above XML is contained in @XML

SELECT --something
FROM @XML.nodes('/people/person/firstName') AS Person(FirstName);
  -- This will return a 2 row table. Each row contains the whole XML.
  -- One row's pointer starts at line 3
  -- One row's pointer starts at line 11
  -- To access this, the select needs to reference Person.FirstName and call an XQuery function like 'value'

SELECT --something
FROM @XML.nodes('/people/person') AS Person(Person);
  -- This will return a 2 row table. Each row contains the whole XML.
  -- One row's pointer starts at line 2
  -- One row's pointer starts at line 10
  -- To access this, the select needs to reference Person.Person and call an XQuery function like 'value'

That’s it for this lesson, next lesson will go through an example of how to put everything together in a working application.

Service Broker 101 Lesson 5: Stored procedures to handle messages

At this point we’ve set up our message types, our contracts, our services, and our queues. We’ve opened conversations and sent messages across them (but only initiator to target), and we’ve seen how to get those messages off the target queue. This lesson will go through the last step in the process, doing something with these messages.

Message handling from outside of the queue

This is the easier option, but it will probably cause you some issues down the line. This is where you let messages accumulate on the queue, and at set times run a stored procedure (or other statement) to do something with them. It might look something like this:

DECLARE @Messages AS TABLE
    (
          MessageType NVARCHAR(256) NOT NULL
        , MessageBody XML NULL
    );

RECEIVE
      message_type_name
    , CAST(message_body AS XML) AS XMLMessageBody
FROM dbo.TargetQueue
INTO @Messages;

INSERT INTO dbo.GoodMessages
    (
          MessageType
        , MessageBody
    )
SELECT
      Msg.MessageType
    , Msg.MessageBody
FROM @Messages AS Msg
WHERE 1 = 1
    AND Msg.MessageType IN
        (     'ServiceBrokerExample/Example1/MessageType/Outgoing'
            , 'ServiceBrokerExample/Example2/MessageType/Outgoing');

INSERT INTO dbo.ServiceBrokerErrors
    (
          MessageType
        , MessageBody
    )
SELECT
      Msg.MessageType
    , Msg.MessageBody
FROM @Messages AS Msg
WHERE 1 = 1
    AND Msg.MessageType LIKE 'ServiceBrokerExample/Example[0-9]/MessageType/Error';

This code will retrieve every message from the queue, and add it to one of the two tables depending on the message type. This will process the messages and empty the queue, but what about all of the conversations that have been left open? We could execute an END CONVERSATION statement for each conversation_handle in the queue, but that will only close the conversation from the target side. We could END CONVERSATION WITH CLEANUP, but that’s a brute force approach like KILLing a connection, and should really be a last resort. In any case, we don’t know for sure which of these conversations we do want to end. If we are doing something more complex, maybe we want to keep a conversation going for some time.
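
If you do decide that every conversation in the batch should be ended from this side, one sketch (extending the @Messages table above to capture the handle during the RECEIVE) looks like this:

```sql
DECLARE @Messages AS TABLE
    (
          ConversationHandle UNIQUEIDENTIFIER NOT NULL
        , MessageType NVARCHAR(256) NOT NULL
        , MessageBody XML NULL
    );

RECEIVE
      conversation_handle
    , message_type_name
    , CAST(message_body AS XML)
FROM dbo.TargetQueue
INTO @Messages;

-- End each distinct conversation we received from (closes the target side only)
DECLARE @Handle UNIQUEIDENTIFIER;

DECLARE HandleCursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT DISTINCT ConversationHandle FROM @Messages;

OPEN HandleCursor;
FETCH NEXT FROM HandleCursor INTO @Handle;
WHILE @@FETCH_STATUS = 0
BEGIN
    END CONVERSATION @Handle;
    FETCH NEXT FROM HandleCursor INTO @Handle;
END;
CLOSE HandleCursor;
DEALLOCATE HandleCursor;
```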

Message handling using stored procedures as queue handlers

As discussed in the last lesson, you can attach a stored procedure to a queue when creating or altering it using the following syntax:

WITH
    ACTIVATION
        (
              STATUS = ON
            , PROCEDURE_NAME = dbo.TargetQueueProcedure
            , MAX_QUEUE_READERS = 4
            , EXECUTE AS SELF
        )

This stored procedure is used by the queue to handle messages that arrive. When a message arrives the queue will execute the procedure over and over until there are no more messages on the queue, so your stored procedure needs to remove messages as it goes. You have the option to have multiple instances of the procedure executing at once, to clear down a queue faster or to keep up with a high volume of messages, using the MAX_QUEUE_READERS setting. You can turn activation on or off using STATUS; while this is set to OFF nothing will happen, but as soon as it is set to ON the procedure will start processing messages again. Finally, you need to specify which user the procedure will execute as. The options here are SELF (the current user, i.e. the person who runs the CREATE or ALTER script), OWNER (the person who owns the queue), or a username that the current user has IMPERSONATE permissions for.
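
For example, toggling activation on and off on an existing queue is just an ALTER:

```sql
-- Pause processing: messages accumulate on the queue but nothing runs
ALTER QUEUE dbo.TargetQueue
WITH ACTIVATION (STATUS = OFF);

-- Resume: the procedure starts working through the backlog again
ALTER QUEUE dbo.TargetQueue
WITH ACTIVATION (STATUS = ON);
```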

Below is an example of how you might set out a stored procedure you want to run as a queue handler:

DECLARE
      @ConversationHandle UNIQUEIDENTIFIER
    , @MessageType NVARCHAR(256)
    , @MessageBody XML
    , @ReceivedMessage BIT = 1

WHILE @ReceivedMessage = 1
BEGIN
    SET @ReceivedMessage = 0
    WAITFOR
        (
            RECEIVE TOP (1)
                  @ConversationHandle = conversation_handle
                , @MessageBody = message_body
                , @MessageType = message_type_name
                , @ReceivedMessage = 1
            FROM dbo.TargetQueue
        ), TIMEOUT 1000;

    IF @ReceivedMessage = 1
    BEGIN
        IF @MessageType IN
                ('ServiceBrokerExample/Example1/MessageType/Outgoing'
                , 'ServiceBrokerExample/Example2/MessageType/Outgoing')
            INSERT INTO dbo.GoodMessages
                (
                      MessageType
                    , MessageBody
                )
            VALUES (@MessageType, @MessageBody);
            
        IF @MessageType LIKE 'ServiceBrokerExample/Example[0-9]/MessageType/Error'
            INSERT INTO dbo.ServiceBrokerErrors
                (
                      MessageType
                    , MessageBody
                )
            VALUES (@MessageType, @MessageBody);
        
        END CONVERSATION @ConversationHandle;
    END
END

The WAITFOR, TIMEOUT 1000 that we wrap around the RECEIVE TOP (1) does almost what you might expect: it waits up to 1000 milliseconds (the TIMEOUT is in milliseconds, not seconds) to see if we can retrieve a message from the queue, and returns either at the one second mark or as soon as a message is received. The WHILE @ReceivedMessage = 1 ensures that the query will keep executing as long as it finds messages, and will end as soon as it does not.

Once the message has been received we do something with it, in this case adding it to different tables depending on the message type, and then close the conversation. We could also send messages back if we wished, using the same conversation, and keep the conversation open if we expect to receive more messages from our initiator queue in response. We could even open more conversations and send messages on to further queues.

The initiator queue will need a similar query attached to it. At the simplest, this should just look for the Microsoft “http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog” message type, and end the conversation from the initiator endpoint as well. It may also do other things, depending on how complex you want the conversation to be, but the most important thing is that any possible conversation path ends with both sides executing an END CONVERSATION eventually. Otherwise conversation endpoints will persist and cause performance issues down the line.
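
As a sketch, the simplest possible initiator-side handler could look like this (dbo.InitiatorQueue is a hypothetical name; adapt it to whatever your initiator queue is called):

```sql
DECLARE
      @ConversationHandle UNIQUEIDENTIFIER
    , @MessageType NVARCHAR(256);

WAITFOR
    (
        RECEIVE TOP (1)
              @ConversationHandle = conversation_handle
            , @MessageType = message_type_name
        FROM dbo.InitiatorQueue -- hypothetical initiator queue name
    ), TIMEOUT 1000;

-- Close our half of the conversation once the target has ended theirs
IF @MessageType = N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog'
    END CONVERSATION @ConversationHandle;
```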

POISON_MESSAGE_HANDLING

POISON_MESSAGE_HANDLING is another queue-level property you can set with a CREATE or ALTER QUEUE statement. A poison message in the service broker context is a message that causes the activation stored procedure to roll back the transaction when it executes. This will roll back the RECEIVE statement and put the message back on the queue, then the activation stored procedure will activate, attempt to process it again, and roll back again. This can cause an infinite loop, and potentially block up the queue or at the very least consume resources.

One way round this is to not include transactions and rollbacks in your procedure. This might be OK depending on how you handle the error, but what if the error handling also fails? And what if the failure is a temporary connection problem or something similar? Ideally then you do want to roll back and try again.

The POISON_MESSAGE_HANDLING setting decides what the queue does in this situation. The default is ON, in which case the queue will disable itself (STATUS = OFF) after 5 consecutive rollbacks. If you set it to OFF, the queue will keep trying to execute the stored procedure forever, so if you are going to disable this you need to be confident your activation stored procedure can handle the scenario itself.
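
If the queue has disabled itself, the fix is to deal with the offending message and then re-enable the queue; opting out of the automatic disable is set in the same place:

```sql
-- Re-enable a queue that poison message handling has switched off
ALTER QUEUE dbo.TargetQueue
WITH STATUS = ON;

-- Opt out of the automatic disable (only if your activation procedure
-- is robust enough to cope with a message that always fails)
ALTER QUEUE dbo.TargetQueue
WITH POISON_MESSAGE_HANDLING (STATUS = OFF);
```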

That’s all for now, next time the lesson will be a brief look at how you can extract data from the XML of a message.

Service Broker 101 Lesson 4: All about Queues

So far, this series has covered what service broker is, the different components that make it up, and how we use these to open a conversation and send a message to a queue. We’ve talked about queues quite a bit in these sessions, but never really gone into detail about what they are, so that is what this lesson is all about.

Retrieving data from Queues

A queue is a full database object, like a table or a stored procedure. As such, it is part of a schema, and appears in the sys.objects view. A queue holds messages that have been sent to it, in the same way that a table does, and these messages can even be queried in the same way that you would query a table.

You can’t change the columns that are available, and there are quite a few of them. To see what there is, just run SELECT * against any queue, but a few of the key ones are service_name, service_contract_name, message_type_name, message_body, message_enqueue_time, conversation_handle.

You can also retrieve data from a queue using a RECEIVE statement. These work very much like SELECTs, except that they remove the received message from the queue. This is how you would typically process messages, to make sure that the queue does not grow too large. Also note that the RECEIVE statement, like a CTE, needs the previous statement to terminate with a ;

RECEIVE TOP (1)
      priority
    , conversation_handle
    , message_type_name
    , CAST(message_body AS XML) AS XMLMessageBody
    , message_body
FROM dbo.TargetQueue

As you can see, you need to convert the message_body to XML or a string data type for it to make any sense. When you specify a TOP (X) in your RECEIVE statement, you will get the top X messages in the order they arrived (oldest first). You can also filter RECEIVE statements, but only on conversation_handle or conversation_group_id.
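
For example, a RECEIVE filtered to a single conversation might look like this sketch (it assumes @ConversationHandle has already been captured from a BEGIN DIALOG or an earlier RECEIVE):

```sql
DECLARE @ConversationHandle UNIQUEIDENTIFIER; -- populated elsewhere, e.g. by BEGIN DIALOG

RECEIVE TOP (1)
      message_type_name
    , CAST(message_body AS XML) AS XMLMessageBody
FROM dbo.TargetQueue
WHERE conversation_handle = @ConversationHandle;
```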

Creating queues

Queues can be a bit more complex to create, and there are a few things you need to be aware of. The basic queue creation is pretty simple:

CREATE QUEUE dbo.TargetQueue

But there are a few more options than we’ve seen for the other components we’ve created. Note that queues cannot use the CREATE OR ALTER syntax that was brought in with SQL Server 2016.

CREATE QUEUE dbo.TargetQueue
WITH
      STATUS = ON
    , RETENTION = OFF
    , ACTIVATION
        (
              STATUS = ON
            , PROCEDURE_NAME = dbo.TargetQueueProcedure
            , MAX_QUEUE_READERS = 4
            , EXECUTE AS SELF
        )
    , POISON_MESSAGE_HANDLING (STATUS = ON)

Everything in the WITH block can be changed with an ALTER QUEUE statement.

STATUS – This effectively says if the queue is active or not, so whether it can send and receive messages or not. The default is ON
RETENTION – If this is on, messages on the queue are kept until the end of a conversation regardless of if they have been RECEIVEd or not. It defaults to OFF
ACTIVATION – This allows you to attach a stored procedure to the queue, that will handle messages as they arrive. This is the subject of the next lesson, so I won’t say anything more about it here.
POISON_MESSAGE_HANDLING – This relates to the ACTIVATION section, so I will cover it next lesson.

That’s pretty much all there is to queues; next lesson will dive deeper into the stored procedures we attach to them and how they work.

Service Broker 101 Lesson 3: Conversations and messages

The last post went through the basics of the different components Service Broker uses in SQL Server. I talked about message types, queues that send and receive messages, contracts that specify particular message types to be sent to the target or initiator queue, and services that attach to queues and specify the contracts that can target that queue.

This lesson is all about how we fit these things together to open a conversation and send a message. I’ve seen some other posts that rush through this so I want to take my time with this and go through it step by step.

The first thing we need is an open conversation. This is generated with the BEGIN DIALOG CONVERSATION statement. Oddly, the CONVERSATION part is redundant, despite the fact that every other command refers to these items as conversations, not dialogues. Anyway, the command to open a simple conversation is as follows:

DECLARE @ConversationHandle UNIQUEIDENTIFIER;

BEGIN DIALOG CONVERSATION @ConversationHandle
    FROM SERVICE [ServiceBrokerExample/Example1/Service/ServiceSource]
    TO SERVICE 'ServiceBrokerExample/Example1/Service/ServiceTarget'
    ON CONTRACT [ServiceBrokerExample/Example1/Contract/Complicated]

There are a few key things here.

First, each conversation has its own unique dialog_handle (not conversation handle, despite everything calling it a conversation from now on, score one for consistency Microsoft). We need to capture this handle in a UNIQUEIDENTIFIER variable, as we will need it later on to send messages across the conversation. In fact, the statement will error if you don’t supply a variable to capture the handle.

Second, we need to supply both FROM and TO services. These tell the conversation which service is the source and which is the target. Remember, each service is attached to a queue, and can have one or more contracts attached to it. The source service is a database object, but the target service is an NVARCHAR. This allows the target service to live outside the database, which is something that I will cover at some point in the Service Broker 201 series.

Finally, we need to specify the contract the conversation will obey. This contract details which message types can be sent by the initiator and target services, and it has to be one of the contracts allowed by the target service.

At this point we have a conversation and can start sending messages. The code to do that is below:

SEND
    ON CONVERSATION @ConversationHandle
    MESSAGE TYPE [ServiceBrokerExample/Example1/MessageType/Outgoing]
        ('<message>hello world</message>')

This sends a message of the specified type across the specified conversation, from the initiator to the target. You can also specify the message body in parentheses after the message type, but this is only necessary if the message type’s validation demands one. Sometimes just having a message of a specified type is all the information you need.

Note that the initiator-to-target rule applies to any code executed in a standard session connected to the database that holds the initiating service. Retrieving a message from the queue, and sending messages from the target back to the initiator, will be covered in the next lesson, about queues.

The last thing we need to do is close our conversation. To do this we call the END CONVERSATION @ConversationHandle statement. This sends a message using the inbuilt Microsoft message type http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog and closes the conversation from one end. To fully close the conversation you need to also call END CONVERSATION from the other end.

It’s important to always fully close conversations from both ends, otherwise they will hang around eating resources in your database. They may not be much individually, but over time they can add up significantly. A conversation does have a lifetime, specified in seconds, and you can set the lifetime when you create it, but if you don’t the default is 2,147,483,647 (max value of an INT) which is roughly 68 years.
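If the default is far too long for your needs, you can set a shorter lifetime when you open the conversation. A minimal sketch, reusing the services from the example above:

```sql
DECLARE @ConversationHandle UNIQUEIDENTIFIER;

-- LIFETIME is in seconds: this conversation expires after one hour
BEGIN DIALOG CONVERSATION @ConversationHandle
    FROM SERVICE [ServiceBrokerExample/Example1/Service/ServiceSource]
    TO SERVICE 'ServiceBrokerExample/Example1/Service/ServiceTarget'
    ON CONTRACT [ServiceBrokerExample/Example1/Contract/Complicated]
    WITH LIFETIME = 3600;
```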

That’s it for now; hopefully this stuff is starting to come together. The next lesson will be all about queues: how we retrieve messages from them, how we process those messages once we have them, and how we can send messages back from our target to the initiator.

Custom backgrounds in Microsoft Teams

Just a quick blog post today, taking you through how to add a custom background to Microsoft Teams.

Instead of doing something through Teams itself, you need to copy the custom image to the C:\Users\YOURUSERNAME\AppData\Roaming\Microsoft\Teams\Backgrounds\Uploads folder, obviously replacing YOURUSERNAME with your actual username. Note that if you try to navigate to this folder in File Explorer, AppData is a hidden folder, so you need to click on View and tick the Hidden Items checkbox before you can see it.

Once you have added the image to that folder, you can select it in Teams by clicking on the ellipsis during a call and selecting Show Background Effects. You should see a number of possible images, with your custom images at the end.

Service Broker 101 Lesson 2: Service Broker components and how they fit together

Last lesson I gave a very quick introduction to Service Broker, and outlined a couple of scenarios where it might be useful. This time I want to talk through the different components that make up Service Broker, how they fit together, and what part they play in sending and managing messages.

There are 4 main components that Service Broker needs you to create in SQL Server:

Message Type

Any message you send will have two components, the message type and the message. The CREATE MESSAGE TYPE syntax allows you to specify a message type, and what constitutes a valid message for that type.

CREATE MESSAGE TYPE [SBE/Example1/MessageType/Outgoing]
    VALIDATION = WELL_FORMED_XML;

In this case, we create a message type of SBE/Example1/MessageType/Outgoing that takes a well-formed XML message. Other values for VALIDATION are NONE, where the message can contain anything or be NULL; EMPTY, where there is no message body at all; or VALID_XML WITH SCHEMA COLLECTION schema_collection_name, where the message has to be XML that conforms to the specified schema collection, which must already exist.

SQL Server already has a collection of message types it uses to signal various events; you can find them in the sys.service_message_types catalog view.
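If you’re curious, a quick query against that catalog view will list them, along with the validation each one uses:

```sql
SELECT name, validation_desc
FROM sys.service_message_types;
```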

Queues

Queues are the Service Broker component most similar to the SQL objects we’re already used to. They are also the only Service Broker object to appear in the sys.objects table, and to be owned by schemas. There is quite a bit to say about them, so I will largely leave them to Lesson 4.

Contracts

Contracts define the types of conversation that can be had between Queues. Specifically, each contract defines the message types that can be sent by the initiator and target queues. Each message type included in the contract can be sent by the initiator, the target, or both.

CREATE CONTRACT [SBE/Example1/Contract/Complicated]
    (
          [SBE/Example1/MessageType/Outgoing] SENT BY INITIATOR
        , [SBE/Example1/MessageType/Reply] SENT BY TARGET
        , [SBE/Example1/MessageType/Alert] SENT BY TARGET
        , [SBE/Example1/MessageType/Error] SENT BY ANY
    )

In this example, once a conversation has been opened under this contract, the initiator queue (the queue that sent the first message) can send Outgoing or Error message types, and the target queue can send Reply, Alert, and Error messages. Note that once a conversation begins, the initiator and target roles are locked for the purposes of that conversation.

Service

A service is the way a conversation connects to a queue. Each service sits above a single queue, although a queue can have multiple services. In order to be the target of a conversation, a service must also specify at least one contract that can be used to target it.

CREATE SERVICE [SBE/Example1/Service/ServiceTarget]
    ON QUEUE dbo.TargetQueue
    (
          [SBE/Example1/Contract/Complicated]
        , [SBE/Example1/Contract/Emergency]
    )

CREATE SERVICE [SBE/Example1/Service/ServiceSource]
    ON QUEUE dbo.SourceQueue

CREATE SERVICE [SBE/Example1/Service/ServiceEmergencySource]
    ON QUEUE dbo.EmergencySourceQueue

In this example, the ServiceTarget service allows the TargetQueue to be the target of conversations using the Complicated or Emergency contracts. The ServiceSource service allows the SourceQueue to initiate conversations but not be the target of them, and the ServiceEmergencySource service does the same for the EmergencySourceQueue.
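Once your services exist, you can check how everything is wired up through the catalog views. A small sketch that lists each service alongside the contracts (if any) it accepts as a target:

```sql
SELECT s.name AS service_name
     , c.name AS contract_name
FROM sys.services AS s
LEFT JOIN sys.service_contract_usages AS u
    ON u.service_id = s.service_id
LEFT JOIN sys.service_contracts AS c
    ON c.service_contract_id = u.service_contract_id;
```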

That’s it for this lesson, I know this is probably a little confusing at the moment but next lesson I’ll take us through opening a conversation and sending a message, and at that point things should get a little clearer.

Service Broker 101 Lesson 1: What is Service Broker?

If you don’t really know what Service Broker is, you’re not alone. I had probably heard the term a couple of times in my 14 years as a SQL developer, but had never come across anyone using it until I started my latest job. Even then, I only discovered it when I imported a database into an SSDT database project, and saw a Queue object appear.

I did a little investigation after that, and it seemed an interesting, if little used, piece of functionality. I didn’t really think anything more of it, but filed it away in the bit of my brain marked “investigate someday” (that part of my brain gets pretty cluttered and is seldom cleared out).

Then, recently, I had an issue where Service Broker seemed the perfect solution, so I spent a weekend experimenting, coded the fix using Service Broker, and that release is making its way through UAT at the moment.

But what is Service Broker?

I can hear you thinking “yeah, yeah, get to the point already”, so I will.

Service Broker is a queueing system within SQL Server. It allows you to send a message from one queue to another, and handle what happens when that message arrives. That’s pretty much it.

So, why would I want that?

Well, that’s where the story I told you at the start comes in. The issue we had was that our client wanted to run a process during the day that usually gets run at night. This process is pretty long, and locks up some tables for a minute or more, including some tables it locks through an indexed view (that’s a whole other issue that I’ll maybe blog about some other day). At the same time, users are logging onto the application to do various things, including downloading one-use vouchers. The stored proc that sits behind that API reads the data fine, but needs to write a record that these vouchers have been viewed to one of the locked tables.

What I’ve done is shift that write from the stored procedure to a queue. Now when a user requests their vouchers, the system selects them from the right table and fires off a message to the queue with the details of the vouchers they just viewed, and the queue adds them to the table whenever it gets a chance.

So, that’s scenario #1: when you have a process that needs to do something but doesn’t need to do it immediately, you can split that part out and use a queue to have it happen as soon as possible, allowing your main process to complete more quickly. Typically this will be logging that something has happened, where you need to return the result immediately but the logging table might be blocked by either a big read/write or lots of small reads/writes.
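To make that concrete, here’s a rough sketch of the pattern. All the names here (services, contract, message type) are made up for illustration; the real solution obviously depends on your own Service Broker setup:

```sql
DECLARE @Handle UNIQUEIDENTIFIER;

-- Instead of writing to the locked table directly, open a conversation
-- and fire off the details, then return to the caller straight away
BEGIN DIALOG CONVERSATION @Handle
    FROM SERVICE [VoucherApp/Service/LogSource]
    TO SERVICE 'VoucherApp/Service/LogTarget'
    ON CONTRACT [VoucherApp/Contract/VoucherViewed];

SEND ON CONVERSATION @Handle
    MESSAGE TYPE [VoucherApp/MessageType/VoucherViewed]
        (N'<viewed userId="123" voucherId="456" />');
```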

Scenario #2 is when you are running something like the ETL systems I’ve seen and heard getting built more and more. These systems work off queues in any case, typically built as tables, with an overarching system that dynamically decides what happens and in what order.

As an example, you start by importing 3 files, so you add those imports to the queue and they all start executing. Once the first one is finished, you add a validation for that to the queue and that starts processing. File 2 finishes importing but that’s flagged as not needing validation so you check what data loads you can run with just that file and add them to the queue. File 3 takes a long time to load, and File 1 finishes validation in that time but there’s nothing that can be populated from File 1 until File 3 is finished loading so nothing gets added to the queue at that point.

If you have a system that wants to work like that, where you are executing a set of modules in a dynamic order, then Service Broker may be useful as a way of managing that order.

I was going to post a bit more here about the different components that make up Service Broker, but I’ve gone on for longer than I expected just on this, so I think I’ll leave that for the next post.

Lessons

I’m trying something new with my blog post this week. I want to start doing different series on subjects I think people would benefit from a deep dive into. I’m starting with Service Broker, a topic I knew nothing about until a few months ago. Other possible topics include:

  • Different components of SQL Server
  • SSAS
  • MDS
  • DQS
  • SQLCLR

Some of these I know quite a bit about, others I don’t have much of a clue at the moment and that’s part of the reason I want to write about them, to force myself to learn. These posts will be titled something like [topic] 101 Lesson 1: [lesson subject]. Depending on how this goes I might add some 201 or 301 series in the future, but for now the idea is to assume no knowledge from the reader and try to get to a point where they can not only (in this case) code a simple service broker solution, but also understand what they are doing.

Anyway, the first service broker post goes up today. Enjoy.