Categories: "Tracing & Monitoring"
How to import Extended Events session file target and parse deadlock-graph
Mar 31st
Deadlocks, which you can find explained here, occur between at least 2 worker threads, but often more than 2 are involved.
Before SQL Server 2008, there were only the following ways to trace deadlocks: 1. A SQL Trace, with Profiler or without the GUI, logging to a trace file or watched live in the GUI. 2. Trace flags 1204, 1205 or 1222, which log into SQL Server's error log – “wonderful”, especially on “eventful” servers, and decidedly difficult to analyze. Some may still remember those horrible entries in the error log:
3. Event Notifications, which provide the data in XML format via a Service Broker queue. 4. A WMI alert, which also provides the data as XML and, coupled with a job or a notification, can log it or send an alert.
All these approaches, different as their respective advantages may be, have one thing in common: they have to be set up first – in case of doubt, after the first deadlocks have already occurred. Since SQL Server 2008, however, all deadlocks are logged out of the box. This is done by the automatically running system_health Extended Events session, which, alongside deadlocks, also captures other important server events and error information. You can find details on it here. Since SQL Server 2012, the system_health session logs not only to the memory target ring_buffer but also to an event file, so deadlocks and other errors remain available for quite a while even after a server restart (up to several days) – until the rollover of the 5 files has taken place. For SQL Server 2008 instances, I recommend adding the file target.
How can you read out this data?
SELECT
target_data
FROM
sys.dm_xe_session_targets AS dm_xe_session_targets
INNER JOIN sys.dm_xe_sessions AS dm_xe_sessions
ON dm_xe_session_targets.event_session_address = dm_xe_sessions.address
WHERE
dm_xe_sessions.name = 'system_health'
AND target_name = 'ring_buffer'
target_data is XML text stored as a Unicode text blob which, after conversion to XML, can even be opened directly in SSMS. In code, that looks like this:
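In the original post this code was shown as a screenshot; a minimal sketch of the conversion, reusing the query from above:

SELECT
    CAST(dm_xe_session_targets.target_data AS xml) AS target_data_xml
FROM
    sys.dm_xe_session_targets AS dm_xe_session_targets
INNER JOIN sys.dm_xe_sessions AS dm_xe_sessions
    ON dm_xe_session_targets.event_session_address = dm_xe_sessions.address
WHERE
    dm_xe_sessions.name = 'system_health'
    AND target_name = 'ring_buffer'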
When you click such a deadlock report in XML format in SSMS, it opens in a new window as an XML document:
– in the picture I have collapsed some nodes for clarity. The important parts are: the victim-list, which for a multi-victim deadlock contains one process id per victim, each of which can then be examined more closely in the process-list. In the executionStack you will find the executed SQL statement and possibly even the procedure name directly. Below that, the resource-list contains a list of the involved (locked) resources. All nodes are peppered with details such as line number, object name, lock mode, wait time, SPID and, of course, timestamps.
Whoever wants to dig in right here can of course take the XML apart further and try to present individual nodes relationally in column form:
CAST(XEventData.XEvent.query('(data/value/deadlock)[1]') AS xml) AS DeadlockGraph
From there, one can easily parse the individual nodes:
XMLCol.value('(/deadlock/process-list/process)[1]/@isolationlevel', 'varchar(100)')
In this manner one can proceed with almost all attributes and elements, as the sketch below shows.
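Pieced together from the two fragments above, a minimal sketch of a complete shredding query might look like this (the node path for the ring_buffer target and the aliases XEventData/XEvent are carried over from the fragments; all other names are assumptions to adapt as needed):

;WITH DeadlockData AS
(
    SELECT
        CAST(XEventData.XEvent.query('(data/value/deadlock)[1]') AS xml) AS DeadlockGraph
    FROM
    (
        SELECT CAST(target_data AS xml) AS TargetData
        FROM sys.dm_xe_session_targets AS st
        INNER JOIN sys.dm_xe_sessions AS s
            ON st.event_session_address = s.address
        WHERE s.name = 'system_health'
            AND st.target_name = 'ring_buffer'
    ) AS RingBuffer
    CROSS APPLY RingBuffer.TargetData.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS XEventData (XEvent)
)
SELECT
    DeadlockGraph.value('(/deadlock/process-list/process)[1]/@isolationlevel', 'varchar(100)') AS IsolationLevel_Process1
    , DeadlockGraph
FROM DeadlockData;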
A further problem arises once you are dealing with multi-victim deadlocks. Potentially, there can be a vast number of locked and killed victim processes, so one can hardly build a schema that is always sufficient. Here is an example of a multi-victim deadlock, displayed in the XEvents GUI (Profiler, like so many other things, could not handle this):
Last month I published my SQL Deadlock-Collector & Parser, freely downloadable at CodePlex – here is the article with a detailed description: Free Deadlock-Collector & -Parser based on Extended Events. There, I decided to always contrast the first two processes, which already helps a lot with most deadlocks. This is how it looks:
Now, how can one read in the event files which the system_health session writes to as well? For that, the system function sys.fn_xe_file_target_read_file is provided. This is how that can look:
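In the original post this was a screenshot as well; a minimal sketch (the file path is taken from the import example further below – adjust it to your instance's log directory):

SELECT
    object_name
    , CAST(event_data AS xml) AS event_data_xml
FROM
    sys.fn_xe_file_target_read_file('D:\LogFiles\SQLServer\SQL2012Dev\system_health_*.xel', NULL, NULL, NULL)
WHERE
    object_name = 'xml_deadlock_report';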
Anyone already using the Deadlock-Collector to parse and store their deadlocks can also use it to manually import deadlocks from any system whatsoever into the SQL_Analysis_Data database. To do so, simply use the procedure Locking.ins_DeadLock in the database SQL_Analysis_Code with the parameter @XESource, which is set to use the ring_buffer by default.
USE SQL_Analysis_Code;
DECLARE @DeadlocksCollected int;
EXECUTE Locking.ins_DeadLock
@XESource = 'D:\LogFiles\SQLServer\SQL2012Dev\system_health_*.xel'
, @DeadlocksCollected = @DeadlocksCollected OUTPUT;
SELECT @DeadlocksCollected AS [Number of new Deadlocks collected:];
Afterwards, one can conveniently analyze the deadlocks in a reasonably relational structure via views or ad hoc T-SQL, as described here.
Note: It is possible for deadlocks to occur that do not show up in the system_health session targets – neither in the ring_buffer nor in the event file. This happens when the deadlock graph is too large for the buffer memory. To make sure that larger deadlocks can also be collected, I recommend increasing the buffer memory of the system_health session.
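A minimal sketch of such a change (the 16 MB value is an assumption – size it to your environment; stopping the session first avoids errors when changing session-level options):

ALTER EVENT SESSION [system_health] ON SERVER STATE = STOP;
ALTER EVENT SESSION [system_health] ON SERVER
    WITH (MAX_MEMORY = 16384 KB); -- raise the session buffer so larger deadlock graphs fit
ALTER EVENT SESSION [system_health] ON SERVER STATE = START;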
And for whoever is still relying on Profiler & SQL Trace (deprecated since SQL Server 2012, for good reason), here is one more tip:
That looks more like a “multi-winner deadlock” ;-) And this is how the XEvents GUI displays it:
Happy XEvent parsing
Andreas
Free Deadlock-Collector & -Parser, based on Extended Events session system_health
Feb 27th
Until now I have only given it out to attendees of my SQL Server Master-Classes and a few fellow Microsoft Certified Masters. From today on, it is freely available to everyone.
If you just want to grab it real quick, here is the link to the CodePlex project where you can download the necessary objects: >> sqldeadlockcollector.codeplex.com << For more information on how to set up the solution and what kind of analysis it enables, read on below.
Description The SQL Server Deadlock-Collector collects information which can be read out from the ring buffer in the memory of the SQL Server, and stores this information in a database table. Apart from the pure deadlock graph, the data is parsed, and the first 2 respective processes of the deadlock graph are contrasted in 2 columns each, with information such as process ID, SQL/procedure text, host name, application name etc. (the procedure name is only available from SQL Server 2012 onwards because of incomplete XML in the former version).
This way, one obtains a tabular representation of the deadlock processes. Since there may be more than 2 involved processes, the complete graph is always present as well (red in the image below).
With the aid of a few sample reports, administrators obtain a quick overview of the frequency of the deadlocks occurring in the respective databases, and can pass the data on for further analysis to the developers – or even for external analysis, simply by providing a copy/backup of the database. Developers thus obtain a valuable and quickly interpretable view of the occurring deadlock combinations and can implement prevention strategies based on this information.
Technical Implementation The complete SQL Deadlock Collector & Parser solution consists of 3 (small) databases, a SQL Server Agent job and 3 example Reporting Services reports for Management Studio integration. By utilizing the information that the Extended Events system_health session already holds in memory, the SQL Deadlock Collector & Parser is absolutely lightweight and has scarcely any impact on a productive system. Only at execution time of the data collection job (which is freely configurable; every 6–24 hours is common) are minimal resources required.
Supported SQL Server Versions & Editions The SQL Deadlock Collector & Parser is designed to run on all SQL Server versions and editions from SQL Server 2008 SP2 onwards. The provided example reports for Management Studio integration only work with Management Studio 2012 onwards. Since SSMS is available as a free Express Edition that is mostly backwards compatible with SQL Server 2008 / R2, this is a minor constraint. Here you can get SSMS Express in version 2014: www.microsoft.com/en-US/download/details.aspx?id=42299
Setup High-level The SQL Deadlock Collector & Parser is available in the form of backup files of the respective databases, 3 in total: SQL_Analysis_Data, SQL_Analysis_Code and SQL_Analysis_Reporting.
As can be seen from the names, this relatively small project, too, follows the proven Sarpedon Quality Lab methodology of separating data, code and reporting; this facilitates updates. The 3 databases must be set up on the same server.
Setup instructions:
If multiple instances are to be monitored, repeat each step for each instance.
Setup the Jobs Subsequently, two SQL Server Agent jobs should be installed and configured as follows:
1)
Name: SQL_Analysis_Collect_Deadlocks
Description: Job that collects Deadlock-Graphs from system_health to store in SQL_Analysis_Data.
Step 1, Type T-SQL, Name: Exec Proc Locking ins_DeadLock
Database context: SQL_Analysis_Code
Command: DECLARE @DeadlocksCollected int;
EXECUTE Locking.ins_DeadLock
@DeadlocksCollected = @DeadlocksCollected OUTPUT;
SELECT @DeadlocksCollected AS [Number of new Deadlocks collected:];
2)
Name: SQL_Analysis_Remove_Old_Deadlocks
Description: Job that removes collected Deadlocks older than X days from SQL_Analysis_Data
Step 1, Type T-SQL, Name: Exec Proc Locking del_DeadLock
Database context: SQL_Analysis_Code
Command: DECLARE @DeleteOlderThanDate date, @DeadlocksRemoved int;
SET @DeleteOlderThanDate = DATEADD(dd, -60, SYSDATETIME());
SELECT @DeleteOlderThanDate AS [LatestDateToKeep:];
EXECUTE Locking.del_DeadLock
@DeleteOlderThanDateX = @DeleteOlderThanDate
, @DeadlocksRemoved = @DeadlocksRemoved OUTPUT;
SELECT @DeadlocksRemoved AS [Number of removed deadlocks/rows from table:];
Please note that the instructions in bold must be used exactly as stated above; all others are suggestions and can be changed to match your naming styles. In addition, set up a schedule (as always, I recommend a dedicated but shared schedule) – for instance, every 6–12 hours, depending on the number of errors in the system_health session, which works in FIFO mode.
After installing the databases for the first time, you will find a few rows of example deadlocks in the Data database – also accessible via the views from the Reporting database.
You can delete those manually or by using the maintenance procedure Locking.del_DeadLock as shown in the SQL Agent job example.
Integration of Reports In SQL Server Management Studio, navigate to the database “SQL_Analysis_Reporting” and there, in the menu, go to Reports – Custom Reports to choose the 3 provided rdl files (separate download) for integration.
When the warning pops up, you need to choose “Run” in order to execute the reports.
After that, the 3 reports will show up in the list. For more information on custom reports in Management Studio, read here: Custom Reports in Management Studio
Usage for Analysis Once deadlocks have been collected, they can be analyzed in different ways with the help of the sample views located in the database SQL_Analysis_Reporting. These range from general statistics down to detailed statements, resources and execution plans.
A special feature is the inclusion of the original query plans (provided they are still available in the cache at the moment of collection) of the first 2 involved processes – very handy for root-cause analysis.
A SQLHandle of 0xFF is shown when the handle from the deadlock graph can no longer be resolved. This happens for trivial plans, or when the plan has been evicted from the cache since the deadlock occurred. The complete deadlock graph can also be used to get a visual representation of the deadlock.
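For manual checks, a handle can be resolved with sys.dm_exec_sql_text; a minimal sketch (the handle value below is purely hypothetical – in practice, take it from the sqlhandle attribute in the deadlock graph's executionStack):

DECLARE @sqlhandle varbinary(64);
SET @sqlhandle = 0x0200000063FC7D052E09844778CDD615CFE7A2D1FB441802; -- hypothetical sql_handle
SELECT st.[text]
FROM sys.dm_exec_sql_text(@sqlhandle) AS st;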
Example of a simple deadlock with 2 processes.
Example of a multi-victim deadlock with 4 processes.
Clicking on the victim-list of a deadlock with 4 victims opens a new XML window containing just the node of victims with their respective process ids, for further manual matching with, for example, the execution stack:
<victim-list>
<victimProcess id="process2766e188" />
<victimProcess id="process3a72ba928" />
<victimProcess id="process27822928" />
<victimProcess id="process293d1868" />
</victim-list>
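Shredding this node list back into rows is a one-liner with the nodes() method; a minimal sketch, with the victim-list placed in an xml variable for illustration:

DECLARE @VictimList xml;
SET @VictimList = N'<victim-list>
    <victimProcess id="process2766e188" />
    <victimProcess id="process3a72ba928" />
</victim-list>';
SELECT
    Victims.VictimNode.value('@id', 'varchar(50)') AS VictimProcessId
FROM @VictimList.nodes('/victim-list/victimProcess') AS Victims (VictimNode);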
The Reports On top of the views, one can implement custom reports for recurring analyses. 3 example reports are included in the project and can be integrated into Management Studio directly (versions 2012 and 2014).
Availability The SQL Deadlock Collector & Parser is available as an open source project under the Microsoft Public License (Ms-PL) at CodePlex:
Notes & Recommendations 1) The event time in SQL Server 2008 may deviate strongly (by hours or even days!) due to a mistake in the XML code of the system_health session. Please also see the following Connect item:
http://connect.microsoft.com/SQLServer/feedback/details/649362/extended-event-session-system-health-diferent-timestamp-datetime-and-datetime-on-server-getdate
2) In order to make sure that larger deadlock graphs are also captured, I recommend increasing the maximum memory of the system_health session as you see fit. 3) The size of the Data database (SQL_Analysis_Data) may grow considerably over time. Use the clean-up procedure (Locking.del_DeadLock) inside the Code database to remove old deadlock entries.
Call to Action: Quick Poll I absolutely invite you to use my code free of charge and to comment on any problems you encounter or improvements you would like to see. All I would love for you to do is send me high-level statistics of the kinds of deadlocks you see on your systems. Simply run the following query after you have collected new deadlocks, and send me the result, which consists purely of the number of affected processes and the number of deadlocks.
SELECT
[AffectedProcesses]
, COUNT(*) AS [Number of Deadlocks]
FROM
[SQL_Analysis_Reporting].[Locking].[v_DeadlockInfos]
GROUP BY
[AffectedProcesses]
ORDER BY
[AffectedProcesses] ASC
-->
AffectedProcesses Number of Deadlocks
----------------- -------------------
2 3
3 1
(2 row(s) affected)
You can use the comments, or send me an email if you have access to it (due to spam bots I will not post the address here openly, though). Thank you for all submissions!
I hope you find my tool useful, and good luck with the analysis of your hopefully not too many deadlocks.
Andreas
Performance/ Management Data Warehouse Data Collector & AlwaysOn Availability Groups
Sep 16th
From time to time – most recently in the context of my PASS Essential “SQL Server Analysis tools & Techniques for Performance and general Monitoring” – the question arises whether the MDW works together with the high availability technologies Database Mirroring and AlwaysOn Availability Groups, and if so, how.
The short answer is: yes, it does. The following graph illustrates a possible setup using the latter:
The server (0) holding the MDW database is located outside of the high availability nodes. The databases to be monitored are located in AlwaysOn Availability Groups on servers 1–3.
Part 1: Databases in the secondary role If you set up the MDW in the standard way, you will notice that after a failover, the data of the respective databases disappears from the “Disk Usage” reports of the server on which they were previously in the primary role. The background is that after a failover, the respective databases are now in the primary role on a different server, and on the new secondary they are, in the default setting, no longer readable. At this moment, the System Data Collection Set “Disk Usage”, or rather the underlying job “collection_set_1_noncached_collect_and_upload”, cannot collect data for these databases. On the new primary node, in contrast, these databases will now reappear, as long as they are active in the primary role there. In principle, this behavior is comprehensible: the Data Collector can no longer find any information on these databases and assumes they are no longer relevant – as if they had been deleted. One may certainly wish for a way to intervene here; however, the MDW is currently not flexible in this regard. (The new report “Transaction Performance Analysis Overview”, fed by the “Transaction Performance Collection Set” introduced in SQL Server 2014, does display data for no-longer-active databases.) Having clarified this background, the possible solution is self-evident: the databases must remain readable. With AlwaysOn Availability Groups, this is in principle easily done:
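A sketch of the corresponding T-SQL (availability group and replica names are placeholders):

ALTER AVAILABILITY GROUP [AG1]
    MODIFY REPLICA ON N'SERVER2'
    WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL)); -- make the secondary readable for all connections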
However, one needs to be aware that these databases are now released for all read access – which should be carefully considered with regard to application architecture, performance, and licensing. For the purpose of data collection for performance evaluation alone, I therefore CANNOT recommend it. If, however, the business applications are supposed to have read access to the secondary anyway, the Data Collector is covered by that as well. One more piece of advice: the setting “Read-Intent only” unfortunately does not work with the MDW, since the connection string cannot be manually adapted accordingly.
Part 2: Configuration of the MDW clients Since the databases run on a different node after a failover, the MDW reports must be set up on all servers on which the Availability Group is running. Here, one needs to ensure that the central MDW server can be accessed from all servers. To do this (before SQL Server 2014), the SQL Server Agent account of the client instance must be added to the mdw_writer role on the MDW server (mdw_admin is not necessary) when configuring the MDW through the “Configure Management Data Warehouse Wizard”:
From SQL Server 2014 onwards, when configuring the data collection on the client, it is possible to provide a SQL Server Agent proxy of the type “Operating System (CmdExec)” as the account for accessing the central MDW server:
In this case, it is of course the underlying Windows account, rather than the agent account itself, that must be authorized as mdw_writer on the server.
As soon as all clients are authorized accordingly, one can read the data of all SQL Server AG nodes on the central management server. Depending on which server a database is currently on in the primary role, it will then appear in the corresponding subreport.
Happy collecting
Andreas
New Extended Events for Tracing in SQL Server 2014
Apr 5th
To begin with, there are 3 new packages, all belonging to the new In-Memory OLTP engine, XTP, split across 3 DLLs:
Package | Description | Module name
XtpRuntime | Extended events for the XTP Runtime | {InstanceDir}\MSSQL\Binn\
XtpEngine | Extended events for the XTP Engine | {InstanceDir}\MSSQL\Binn\
XtpCompile | Extended events for the XTP Compile | {InstanceDir}\MSSQL\Binn\
The number of Extended Events has increased from 625 in SQL Server 2012 (SP1) to 870 in SQL Server 2014 (RTM). That makes 245 new events! – Plenty of opportunities to dive into internals :-) Two events have been renamed: recovery_incremental_checkpoint into recovery_indirect_checkpoint, and hekaton_slow_param_passing into natively_compiled_proc_slow_parameter_passing. Yes, you read that correctly: there were already a number of events for the XTP engine in SQL Server 2012 – it is quite common to find bits of future developments within the current release. By the way: SQL Trace has not been touched in the slightest; the number of SQL Trace events has stayed at 180 since SQL Server 2008. Here is an overview of the number of Extended Events vs. SQL Trace events in the various SQL Server versions: Comparing Extended Events vs SQL Trace – or why SQL Trace & Profiler are just a thing of the past :-) Continuing with XEvents: there are no new predicates, but there are a number of new maps, which is of course not surprising considering the number of new events. And here is the list of all new Extended Events – preceded by a quick way to check such counts on your own instance:
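A sketch against the XEvent metadata DMVs (the absolute numbers can differ slightly depending on whether private/internal objects are filtered out):

SELECT COUNT(*) AS NumberOfXEvents
FROM sys.dm_xe_objects
WHERE object_type = 'event'
    AND (capabilities IS NULL OR capabilities & 1 = 0); -- exclude private/internal events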
Package name | Event name | Description
qds | query_store_async_shutdown_failed | Fired when query store encounters an error during async query store shutdown |
qds | query_store_background_cleanup_task_failed | Fired if the background task for cleanup of QDS stale data is not completed successfully |
qds | query_store_background_task_creation_failed | Fired if the background task for QDS data persistence could not be created |
qds | query_store_background_task_initialization_failed | Fired if the background task for QDS data persistence could not be initialized |
qds | query_store_background_task_persist_failed | Fired if the background task for QDS data persistence is not completed successfully |
qds | query_store_begin_persist_runtime_stat | Fired immediately before current runtime statistics for a query plan is persisted to the database. |
qds | query_store_bloom_filter_false_positive | Fired if the Bloom filter for QDS statement texts gives a false positive result |
qds | query_store_check_consistency_init_failed | Fired when check consistency task fails during initialization |
qds | query_store_database_initialization_failed | Fired if initialization of the Query Store for database has failed. The Query Store will be disabled for this database |
qds | query_store_db_cleanup__finished | Fired if cleanup of QDS stale data finished for particular database. |
qds | query_store_db_cleanup__started | Fired if cleanup of QDS stale data started for particular database. |
qds | query_store_disk_size_check_failed | Fired when a check against Query Store on-disk size limit fails |
qds | query_store_disk_size_info | Fired when a check against QDS on-disk size is performed |
qds | query_store_execution_runtime_info | Fired when runtime information is sent to the query store. |
qds | query_store_execution_runtime_info_discarded | Fired when runtime information sent to the query store is discarded. |
qds | query_store_execution_runtime_info_evicted | Fired when runtime information sent to the query store is evicted. |
qds | query_store_flush_failed | Fired when query store failed to flush dirty data |
qds | query_store_loaded | Fired when query store is loaded |
qds | query_store_notify_dirty_shutdown_on_partition_startup | Fired when previous instance of query store for the partition is shutdown by force in order to allow creation of a new instance |
qds | query_store_notify_force_failure_failed | Fired when query store failed to notify force failure |
qds | query_store_persist_task_init_failed | Fired when persist task fails during initialization |
qds | query_store_plan_forcing_failed | Occurs when forcing of plan from qds fails |
qds | query_store_plan_persistence_failure | Fired if there's a failure to persist plan |
qds | query_store_plan_removal | Fired when plan is removed |
qds | query_store_query_persistence_failure | Fired if there's a failure to persist query |
qds | query_store_read_write_failed | Fired if the read/write to Query Store internal tables failed |
qds | query_store_statement_not_found | Fired in case when statement couldn't be found due to race condition or ambiguous user request. |
qds | query_store_unloaded | Fired when query store is unloaded from memory |
sqlos | nonpreemptive_long_syncio | record long sync io operation in nonpreemptive worker |
sqlos | stuck_dispatcher_callback_executed | Stuck dispatcher callback executed |
sqlos | wait_completed | Occurs when there is a wait completed on a SQLOS controlled resource. Use this event to track wait completion. |
sqlserver | after_natively_compiled_proc_entry_removal_on_drop | Fired after the procedure cache entry is flushed when dropping a natively compiled procedure. |
sqlserver | availability_replica_state | Occurs when the Availability Replica is starting or shutting down. |
sqlserver | before_natively_compiled_proc_entry_removal_on_drop | Fired before the procedure cache entry is flushed when dropping a natively compiled procedure. |
sqlserver | before_redo_lsn_update | Occurs just prior to the update of the EOL LSN |
sqlserver | buffer_pool_eviction_thresholds_recalculated | Lazywriter and/or worker clock has wrapped the BUF array and thresholds are re-calculated. |
sqlserver | buffer_pool_extension_pages_evicted | Page is evicted from the buffer pool extension cache. |
sqlserver | buffer_pool_extension_pages_read | Page is read from the buffer pool extension cache. |
sqlserver | buffer_pool_extension_pages_written | Page or contiguous set of pages evicted into the buffer pool extension cache. |
sqlserver | check_phase_tracing | Occurs when DBCC CHECK enters a new phase of the checking. Use this event to trace the phases of DBCC CHECK process. |
sqlserver | check_thread_message_statistics | Occurs when a phase of DBCC CHECK is finished. Use this event to collect the number of messages a DBCC CHECK thread has sent or received. |
sqlserver | check_thread_page_io_statistics | Occurs when a phase of DBCC CHECK is finished. Use this event to collect the number of logical, physical, and read-ahead IOs a DBCC CHECK thread has performed. |
sqlserver | check_thread_page_latch_statistics | Occurs when a phase of DBCC CHECK is finished. Use this event to collect the number and time of page latch and IO latch waits. |
sqlserver | clustered_columnstore_index_rebuild | Occurs when clustered index on the table was rebuilt. This event is raised three times for ALTER index rebuild operation on CCSI. The event is raised when the operation takes lock on index rebuild resource, when lock is taken on the table and when S lock on the table is upgraded to SCH_M lock to switch indexes in metadata. |
sqlserver | column_store_code_coverage | Code coverage Xevent for columnstore code. |
sqlserver | column_store_index_build_low_memory | Occurs when Storage Engine detects low memory condition and the rowgroup size is reduced. |
sqlserver | column_store_index_build_process_segment | Occurs when a segment is processed |
sqlserver | column_store_index_build_throttle | Shows the statistics of columnstore index build parallelism throttling |
sqlserver | columnstore_tuple_mover_begin_compress | Occurs when column store tuple mover starts compressing a deltastore. |
sqlserver | columnstore_tuple_mover_end_compress | Occurs when column store tuple mover is done compressing a deltastore. |
sqlserver | database_xml_deadlock_report | Produces a deadlock report for a victim, with information scoped to the victim's database. |
sqlserver | db_lock_acquired_from_cache | Occurs when a DB lock is acquired from the XactWorkspace DB lock cache. |
sqlserver | db_lock_released_from_cache | Occurs when a DB lock is released from the XactWorkspace DB lock cache. |
sqlserver | ddl_with_wait_at_low_priority | A DDL statement was executed using the WAIT_AT_LOW_PRIORITY options |
sqlserver | diag_quantum_end | Occurs when the diag is notified of quantum end. |
sqlserver | dyn_throttle_checkpoint | Occurs when checkpointing has been dynamically throttled with a new Checkpoint rate. |
sqlserver | feature_extension | Occurs when received and parses data for a feature in feature extension. |
sqlserver | file_handle_in_use | Fired when a file handle we're trying to delete is in use and we don't expect it to be. The typical response is dumping all the handles in the system via SysInternals HANDLE.EXE |
sqlserver | file_read_enqueued | File read enqueued |
sqlserver | file_read_throttled | File read throttled |
sqlserver | file_write_enqueued | File write enqueued |
sqlserver | file_write_throttled | File write throttled |
sqlserver | ghost_cleanup_task_packet_enqueue | A task packet is enqueued |
sqlserver | ghost_cleanup_task_process_packet | A task packet is dequeued and processed |
sqlserver | ghost_cleanup_task_process_pages_for_db_packet | Purge the pages that contain ghost records found in a database |
sqlserver | ghost_cleanup_task_start | Ghost cleanup task start |
sqlserver | ghost_cleanup_task_suspend | Ghost cleanup task suspend |
sqlserver | hadr_tds_synchronizer_payload_skip | Hadron Tds Listener Synchronizer skipped a listener payload because there were no changes since the previous payload. |
sqlserver | lock_request_priority_state | The priority state of a lock request |
sqlserver | log_cache_write_block | Writing a log block to the log cache |
sqlserver | metadata_ddl_add_column | Occurs when an ALTER TABLE ADD column operation is updating base index. |
sqlserver | metadata_ddl_alter_column | Occurs when an ALTER TABLE ALTER column operation is updating base index. |
sqlserver | metadata_ddl_drop_column | Occurs when an ALTER TABLE DROP column operation is updating base index. |
sqlserver | mixed_extent_activation | Track mixed extent activation and deactivation operations. |
sqlserver | mixed_extent_allocation | Track mixed extent allocation operations |
sqlserver | mixed_extent_deallocation | Track mixed extent deallocation operations. |
sqlserver | mixed_page_allocation | Track mixed page allocation operations |
sqlserver | mixed_page_deallocation | Track mixed page deallocation operations |
sqlserver | mixed_page_scan_file | Track the activity of SGAM scans for mixed page allocation |
sqlserver | mixed_page_scan_page | Track the activity of SGAM scans for mixed page allocation |
sqlserver | mixed_page_skipextent | Track the activity of SGAM scans for mixed page allocation |
sqlserver | natively_compiled_proc_execution_started | Fired before a natively compiled procedure is executed. |
sqlserver | natively_compiled_proc_slow_parameter_passing | Occurs when a Hekaton procedure call dispatch goes to slow parameter passing code path |
sqlserver | process_killed_by_abort_blockers | A process is killed by an ABORT = BLOCKERS DDL statement |
sqlserver | query_execution_batch_hash_aggregation_finished | Occurs at the end of batch hash aggregation. |
sqlserver | query_execution_batch_hash_children_reversed | Occurs each time when hash join reverses build and probe side while processing data spilled to disk. |
sqlserver | query_execution_batch_hash_join_spilled | Occurs each time when hash join spills some data to disk in batch processing. |
sqlserver | query_optimizer_estimate_cardinality | Occurs when the query optimizer estimates cardinality on a relational expression. |
sqlserver | query_optimizer_force_both_cardinality_estimation_behaviors | Both traceflags 2312 and 9481 were enabled, attempting to force both old and new cardinality estimation behaviors at the same time. The traceflags were ignored. Disable one or both of the traceflags. |
sqlserver | query_store_failed_to_capture_query | Fired if the Query Store failed to capture query. The Query Store will not track statistics for this query |
sqlserver | query_store_failed_to_load_forced_plan | Fired if the query failed to load forced plan from QDS. Forcing policy will not be applied |
sqlserver | query_store_persist_on_shutdown_failed | Occurs when SQL Server fails to store dirty entries in Query Store on database shutdown. |
sqlserver | recovery_indirect_checkpoint | Chose to enqueue an indirect checkpoint as we are near our recovery target |
sqlserver | remove_database_cache | Remove database cache |
sqlserver | server_max_workers | Occurs when a request enqueue fails because a worker was not guaranteed. |
sqlserver | session_recoverable_state_change | Occurs when the server determines a state change in terms of session recovery on a connection-resiliency-enabled connection. |
sqlserver | sql_transaction_commit_single_phase | Occurs when a sql transaction is committed with single phase commit. |
sqlserver | xfcb_blob_properties_obtained | Windows Azure Storage blob property is obtained from response header. |
sqlserver | xfcb_failed_request | Failed to complete a request to Windows Azure Storage. |
sqlserver | xfcb_header_obtained | Response header is obtained from request to Windows Azure Storage. |
sqlserver | xfcb_read_complete | Read complete from Windows Azure Storage response. |
sqlserver | xfcb_request_opened | A request is opened to Windows Azure Storage. |
sqlserver | xfcb_send_complete | Request send to Windows Azure Storage is complete. |
sqlserver | xfcb_write_complete | Request send to Windows Azure Storage is complete. |
sqlserver | xtp_create_procedure | Occurs at start of XTP procedure creation. |
sqlserver | xtp_create_table | Occurs at start of XTP table creation. |
sqlserver | xtp_db_page_allocation_allowed | Indicates that page allocations for the database are allowed. |
sqlserver | xtp_db_page_allocation_disallowed | Indicates that page allocations for the database are disallowed due to memory pressure. |
sqlserver | xtp_deploy_done | Occurs at completion of XTP object deployment. |
sqlserver | xtp_matgen | Occurs at start of MAT generation. |
sqlserver | xtp_offline_checkpoint_scan_start | Fired by XTP offline checkpoint when the checkpoint thread begins. |
sqlserver | xtp_offline_checkpoint_scan_stop | Fired by XTP offline checkpoint when the checkpoint thread stops. |
sqlserver | xtp_recover_done | Occurs at completion of log recovery of XTP table. |
sqlserver | xtp_recover_table | Occurs at start of log recovery of XTP table. |
sqlserver | xtp_storage_table_create | Occurs at just before the XTP storage table is created. |
ucs | ucs_connection_rejected_by_proxy_whitelist | After a connection attempt to the UCS proxy endpoint is rejected by whitelist check |
ucs | ucs_proxy_connect_next_hop | UCS proxy next hop connection |
ucs | ucs_proxy_receive_proxy_connect_message | UCS proxy receive proxy connect message |
ucs | ucs_proxy_route_add | UCS proxy route added |
ucs | ucs_proxy_route_disable | UCS proxy route disabled |
ucs | ucs_proxy_route_refresh | UCS proxy route refreshed |
ucs | ucs_proxy_send_proxy_connect_message | UCS proxy send proxy connect message |
XtpCompile | cgen | Occurs at start of C code generation. |
XtpCompile | invoke_cl | Occurs prior to the invocation of the C compiler. |
XtpCompile | mat_export | Occurs at start of MAT export. |
XtpCompile | pitgen_procs | Occurs at start of PIT generation for procedures. |
XtpCompile | pitgen_tables | Occurs at start of PIT generation for tables. |
XtpEngine | after_changestatetx_event | Fires after transaction changes state. |
XtpEngine | alloctx_event | |
XtpEngine | attempt_committx_event | Is raised when a transaction is asked to commit. |
XtpEngine | before_changestatetx_event | Fires before transaction changes state. |
XtpEngine | dependency_acquiredtx_event | Raised after transaction takes a dependency on another transaction. |
XtpEngine | endts_acquiredtx_event | Fires after transaction acquires an end timestamp. |
XtpEngine | gc_base_generation_evaluation | Indicates that an evaluation of updating the GC base generation has been made. |
XtpEngine | gc_base_generation_updated | Indicates that the oldest active transaction hint used for calculating the GC base generation has been updated. |
XtpEngine | gc_cycle_completed | Indicates that a GC notification has been enqueued. |
XtpEngine | gc_notification | Indicates that GC is processing a notification. |
XtpEngine | redo_single_hk_record | Redo on a HK log record |
XtpEngine | trace_add_delta_filter_begin | Adding delta filter. |
XtpEngine | trace_add_duplicate_delta_filter | Adding duplicate delta filter. |
XtpEngine | trace_adding_tx_filter | Adding tx filter. |
XtpEngine | trace_begin_close_ckpt_processing | Beginning close checkpoint processing: checkpoint flush fence = $2 ($3 state). |
XtpEngine | trace_cfd_entry_deleted | Cleaned up data CFD entry for file $2 $3. |
XtpEngine | trace_cfd_rows_created | Successfully created CFD rows. |
XtpEngine | trace_cfdtable_dump | CfdTable dump |
XtpEngine | trace_cfp_handles_opened | Opened CFP handles for {data: $2 delta: $3} in section $4. |
XtpEngine | trace_cfp_removed_from_freelist | Removed CFP from freelist for destination file. |
XtpEngine | trace_checkpoint_file_flush | Indicating that a file has started a flush operation. |
XtpEngine | trace_checkpoint_load_begin | Starting Checkpoint load with checkpoint timestamp=$2. |
XtpEngine | trace_checkpoint_write_io | Logs an event indicating that an IO has started (when event_type == IoStarted) or that an IO has completed (when event_type == IoComplete). |
XtpEngine | trace_checksum_validation_succeeded | Checksum validation for Hekaton checkpoint file during backup succeeded. |
XtpEngine | trace_ckpt_close_begin | Closing checkpoint in active/passive state. |
XtpEngine | trace_ckpt_close_signaled | Signaling checkpoint close. The checkpoint policy object is flagged to close the current checkpoint |
XtpEngine | trace_ckpt_closed | Checkpoint closed |
XtpEngine | trace_ckpt_load_thread_stats | Statistics of the checkpoint load thread. |
XtpEngine | trace_ckpt_mrt_dump | Mrt dump |
XtpEngine | trace_ckpt_serialization_state_dump | Checkpoint serialization state dump. |
XtpEngine | trace_ckpt_stream_io_stats | Io statistics for this particular stream. |
XtpEngine | trace_close_file_log_rec_created | Successfully created close file log record. |
XtpEngine | trace_data_file_pages_txs | This event gives the number of pages and the transaction range for a data file. |
XtpEngine | trace_data_from_page_deserialized | Deserialized data from page $2 in file $4. |
XtpEngine | trace_data_row_modified | Data row modified/deleted. |
XtpEngine | trace_delete_cfd_rows_begin | Matching MRT not found for the Data CFD. Deleting corresponding CFD Rows. |
XtpEngine | trace_delete_DeletedObjectTable_row_begin | Deleting row from DeletedObjectTable in the context of transaction: $5. |
XtpEngine | trace_delete_merged_source_file | Deleting merged source $7 file $5:$6. DeleteLSN = $2:$3:$4. |
XtpEngine | trace_delete_mrtrow_for_target_begin | Deleting mrt row for target. |
XtpEngine | trace_delta_entry_skipped | Skipping delta {$3, $4, $5} because it is beyond the snapshot checkpoint: $6. |
XtpEngine | trace_delta_file_future_count_rows_pages_update | Updating future count pages and count rows for delta file. |
XtpEngine | trace_delta_file_lastgood_count_rows_pages_update | Updating lastgood count pages and count rows for delta file. |
XtpEngine | trace_delta_file_pages_txs | This event gives the number of pages and the transaction range while reading a delta file for a particular operations. |
XtpEngine | trace_delta_file_processed | Completed processing of Delta file. |
XtpEngine | trace_delta_file_set_clean_or_dirty | Delta file set clean or dirty. |
XtpEngine | trace_delta_filter_removed | Removing delta filter from DeltaFilterMap due to transaction filter. |
XtpEngine | trace_delta_watermark_removed | Removed delta watermark. |
XtpEngine | trace_delta_watermark_updated | Updated delta watermark row for FileId: $4. |
XtpEngine | trace_deserialize_chained_records | Info pertaining to the deserialized content of the mrtRecords chain. |
XtpEngine | trace_deserialize_data_from_page_begin | Deserializing data from page $2 in file $4. |
XtpEngine | trace_deserialize_source_delta_file_begin | Deserializing source delta file $4: page range [$2, $3). |
XtpEngine | trace_dump_cfd_row | Dump cfd row |
XtpEngine | trace_dump_mrt_row | Merge Request |
XtpEngine | trace_file_close_skipped | Skipping file close for $2 as the CFP had been merged; it could not be found in the Storage array. |
XtpEngine | trace_file_skipped | Skipping file $4 due to reason $5. |
XtpEngine | trace_file_skipped_with_tx_range | Skipping file $5 with TX range { $3 -> $4 } due to MaxTxId == InvalidTxId. |
XtpEngine | trace_first_page_header_flushed | Flushed first page header for file. |
XtpEngine | trace_flush_io_operation_for_delta_file | This trace is trying to catch pages issued for flush that didn't make it to the file, or were lacking the correct DeltaWatermark Timestamp. |
XtpEngine | trace_flush_skipped_for_closed_data_file | Skipping flush for data file ('$2') as it is closed. |
XtpEngine | trace_future_count_pages_update | Updating future count pages. |
XtpEngine | trace_initial_merge | Doing initial merge for destination $3 at CheckpointTs: $2. |
XtpEngine | trace_insert_DeletedObjectTable_row_begin | Inserting row to DeletedObjectTable in the context of transaction: $5. |
XtpEngine | trace_intialize_worker_file_begin | Processing file close. |
XtpEngine | trace_mark_data_file_closed | Marking the active data file as closed to prevent the offline worker from flushing them. |
XtpEngine | trace_merge_cancelled | Merge was cancelled for destination $2. |
XtpEngine | trace_merge_complete_log_rec_ignored | MergeCompleteLogRecord ignored for destination file $3. |
XtpEngine | trace_merge_complete_log_rec_processed | Successfully processed MergeCompleteLogRecord. |
XtpEngine | trace_mrt_dump | Mrt dump. |
XtpEngine | trace_mrt_row_inplace_update | Mrt row updated in place. |
XtpEngine | trace_mrtrow_not_found | Could not find row with Destination ID: $2 in MRT table. |
XtpEngine | trace_new_delta_watermark_inserted | Inserted new delta watermark row for FileId $3. |
XtpEngine | trace_offline_process_stale_merge_item | Offline processing of stale merge item {$2, $3} Destination $4. |
XtpEngine | trace_pending_tx | Pending Transaction |
XtpEngine | trace_populate_storage_array | Populating the storage array. |
XtpEngine | trace_post_writefile | This event is fired after issuing IO on the proxy. |
XtpEngine | trace_process_file_close_begin | Processing file close. |
XtpEngine | trace_process_merge_complete_log_record | Process merge complete log record. |
XtpEngine | trace_process_merge_request | Process Merge Request |
XtpEngine | trace_process_source_file | Processing source $4 file $2. |
XtpEngine | trace_process_target_file | Processing target $4 file $2. |
XtpEngine | trace_queue_merge_work_item | Queue work item: {$2, $3} $5 - $6 with CheckpointTs as $4. |
XtpEngine | trace_recovered_existing_file | Recovered existing files allocated during forward processing that are not already part of the CCB. |
XtpEngine | trace_resyncing_state_of_file | Resyncing state of file $4 ('$5') post recovery: $2 pages, $3 rows. |
XtpEngine | trace_serialization_object_cleaned | Cleaned serialization object. |
XtpEngine | trace_serialize_abort_transaction_begin | Serializing the abort transaction filter to the delta cache corresponding to its own transaction range. |
XtpEngine | trace_serialize_tx_begin | Serialize TS begin for completing the checkpoint in active state. |
XtpEngine | trace_serialize_tx_end | Serialize TS end for completing the checkpoint. |
XtpEngine | trace_serialized_delta_rows_to_file | Serialized delta rows to file. |
XtpEngine | trace_source_file_load_stats | Load stats for source files in merge. |
XtpEngine | trace_storage_refreshed_for_file | Storage refreshed for file. |
XtpEngine | trace_uninitialize_file | uninitializing_file |
XtpEngine | trace_wrote_delta_watermark | Wrote delta watermark for file: $3, checkpointTs: $2. |
XtpEngine | waiting_for_dependenciestx_event | Raised when we have explicitly waited for dependencies to clear. |
XtpEngine | xtp_before_create_log_record | Fired before submitting a request to the host to serialize a non-transactional log record. |
XtpEngine | xtp_checkpoint_file_flush | Indicates the point at which a given file has been flushed to disk. |
XtpEngine | xtp_checkpoint_file_flush_complete | Indicates the point at which all in-flight buffers have been flushed to disk. |
XtpEngine | xtp_checkpoint_file_read | Indicates reading of a file in XTP checkpoint recovery. |
XtpEngine | xtp_checkpoint_worker_active_log | Indicates that the checkpoint worker has started processing the log records in the active portion of the log. |
XtpEngine | xtp_checkpoint_worker_started | Indicates that the checkpoint worker has started processing the log records. |
XtpEngine | xtp_checkpoint_worker_stopped | Indicates that the checkpoint worker has stopped processing the log records. |
XtpEngine | xtp_checkpoint_write_io | Indicates that the checkpointing subsystem has issued or completed a write IO. |
XtpEngine | xtp_complete_checkpoint | Indicates the begin and end of complete checkpoint processing. |
XtpEngine | xtp_create_log_record | Fires when the XTP engine creates a log record. |
XtpEngine | xtp_delta_marked_dirty | Indicates that a delete/update was serialized and marked a delta file dirty |
XtpEngine | xtp_merge_complete_log_record | Indicates merge complete log record is posted to the log. |
XtpEngine | xtp_merge_log_complete_log_record | Indicates merge log record was interpreted by the offline worker. |
XtpEngine | xtp_merge_process_log_record | Indicates merge log record was interpreted by the offline worker. |
XtpEngine | xtp_merge_request_log_record | Indicates merge request log record is posted to the log. |
XtpEngine | xtp_merge_request_started | Indicates merge request has been picked up by the merge worker thread. |
XtpEngine | xtp_root_deserialized | Indicates that the load of a checkpoint root is complete. |
XtpEngine | xtp_root_serialized | Indicates that the write of the checkpoint root is complete. |
XtpRuntime | bind_md | Occurs prior to binding metadata for a memory optimized table. |
XtpRuntime | bind_tables | Occurs prior to binding tables for a natively compiled procedure. |
XtpRuntime | create_table | Occurs prior to creating memory optimized table. |
XtpRuntime | deserialize_md | Occurs prior to deserializing metadata. |
XtpRuntime | load_dll | Occurs prior to loading the generated DLL. |
XtpRuntime | recover_done | Occurs at completion of checkpoint recovery of a memory optimized table. |
XtpRuntime | recover_table | Occurs at start of checkpoint recovery of a memory optimized table. |
XtpRuntime | serialize_md | Occurs prior to serializing metadata. |
XtpRuntime | unload_dll | Occurs prior to unloading the generated DLL. |
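As an aside, a list like the one above does not have to be copied together manually; it can be generated straight from the Extended Events metadata DMVs. A minimal sketch, assuming an instance that ships the two Xtp packages (SQL Server 2014 or later) – the filter on the package names merely reproduces this excerpt:

-- List all events of the XtpEngine and XtpRuntime packages with their descriptions
SELECT
    dm_xe_packages.name AS package_name,
    dm_xe_objects.name AS event_name,
    dm_xe_objects.description
FROM
    sys.dm_xe_packages AS dm_xe_packages
    INNER JOIN sys.dm_xe_objects AS dm_xe_objects
        ON dm_xe_objects.package_guid = dm_xe_packages.guid
WHERE
    dm_xe_objects.object_type = N'event'
    AND dm_xe_packages.name IN (N'XtpEngine', N'XtpRuntime')
ORDER BY
    package_name, event_name;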
Happy tracing,
Andreas
Comparing Extended Events vs SQL Trace – or why SQL Trace & Profiler are just a thing of the past :-) / Extended Events vs SQL Trace im Vergleich – oder warum SQL Trace & Profiler einfach von gestern sind :-)
Nov 21st
(de) Für alle, die noch mit dem alten Werkzeug SQL Server Profiler (Profiler ist das Frontend für SQL Trace, gestartet mit sp_trace_create) arbeiten, und sich noch nicht für die neue Technologie entscheiden konnten, hier eine kleine Entscheidungshilfe. Was Extended Events (XEvents) besser als SQLTrace machen: |
(en) For those of you who are still working with the old tool SQL Server Profiler (Profiler is the frontend for SQL Trace, started via sp_trace_create) and have not yet brought yourselves to adopt the new technology, here is some decision guidance. What Extended Events (XEvents) do better than SQL Trace: |
[Comparison table: what Extended Events do better than SQL Trace – see footnotes *1 and *2]
*1 Chart: Extended Events vs. SQL Trace events per version
*2 As an example, for Service Broker there are 15 events in SQL Trace vs. 44 events in XEvents (SQL Server 2012 SP1).
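Those counts can be reproduced on your own instance. A hedged sketch – the name filter on the XEvent side is only an approximation by naming convention, so the exact number will vary per version:

-- SQL Trace side: events in the "Broker" category
SELECT COUNT(*) AS sqltrace_broker_events
FROM sys.trace_events AS trace_events
INNER JOIN sys.trace_categories AS trace_categories
    ON trace_events.category_id = trace_categories.category_id
WHERE trace_categories.name = N'Broker';

-- XEvent side: broker_* events, approximated by name
SELECT COUNT(*) AS xevent_broker_events
FROM sys.dm_xe_objects AS dm_xe_objects
WHERE dm_xe_objects.object_type = N'event'
    AND dm_xe_objects.name LIKE N'broker%';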
Ich hoffe, das hilft dem einen oder anderen, die alte Gewohnheit abzulegen, und die kleine Lernphase in Kauf zu nehmen. Eine Liste aller Extended Events in SQL Server 2012 SP1 samt Ihrem Gegenstück in SQL Trace, welche man für Migrationszwecke (SQLTrace -> XEvent Trace) verwenden kann findet sich in dieser Seite. (Aufgrund einer Größenbeschränkung passte sie nicht mehr hier hinein.) |
I hope this helps some of you to unlearn the old habit and accept the small learning curve. A list of all Extended Events in SQL Server 2012 SP1, including their counterparts in SQL Trace, which can be used for migration purposes (SQL Trace -> XEvent trace), is available on this page. (Due to a size restriction, it no longer fit in here.) |
> Mapping Extended Events with sys.trace_xe_event_map to SQL Trace <
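The mapping behind that page can also be queried directly via the catalog view sys.trace_xe_event_map. A minimal sketch, followed by what a migrated statement-level trace might look like as an XEvent session – the session name TraceMigration_Demo, the file name, and the database_id predicate are purely illustrative:

-- Which XEvent corresponds to which SQL Trace event?
-- Trace events without a match have no mapped XEvent counterpart.
SELECT
    trace_events.trace_event_id,
    trace_events.name AS trace_event_name,
    trace_xe_event_map.package_name,
    trace_xe_event_map.xe_event_name
FROM
    sys.trace_events AS trace_events
    LEFT JOIN sys.trace_xe_event_map AS trace_xe_event_map
        ON trace_events.trace_event_id = trace_xe_event_map.trace_event_id
ORDER BY
    trace_events.name;

-- Example migration target: SQL:StmtCompleted maps to sql_statement_completed.
-- Illustrative session; adjust the name, predicate, and file target as needed.
CREATE EVENT SESSION TraceMigration_Demo ON SERVER
ADD EVENT sqlserver.sql_statement_completed
(
    ACTION (sqlserver.session_id, sqlserver.sql_text)
    WHERE (sqlserver.database_id = 5)  -- unlike SQL Trace filters, predicates are evaluated at capture time
)
ADD TARGET package0.event_file
(
    SET filename = N'TraceMigration_Demo.xel', max_rollover_files = 5
);
-- ALTER EVENT SESSION TraceMigration_Demo ON SERVER STATE = START;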
Happy better tracing,
Andreas
PS: Für 2014 befindet sich mit den SQL Server Master-Classes zum Thema „Tracing mit Extended Events“ auch die nächste Runde der Trainings zu diesem Thema in der Planung :-) |
P.S. For 2014, the next round of trainings on this topic is also in the planning stage, in the form of the SQL Server Master-Classes on “Tracing with Extended Events” :-)