Tag: "sql trace _ profiler"
How to import an Extended Events session file target and parse the deadlock graph
Mar 31st
Deadlocks, which you can find explained here, occur between at least two worker threads, but often more than two are involved.
Before SQL Server 2008 there were merely the following ways to trace deadlocks: 1. A SQL Trace with Profiler, or without the GUI, logging to a trace file or live inside the GUI. 2. Trace flags 1204, 1205 or 1222, which log into SQL Server's error log – "great", especially on "eventful" servers, and rather difficult to analyze. Some may remember those horrible entries in the error log:
3. Event Notifications, which provide the data in XML format via a Service Broker queue. 4. A WMI alert, which also provides the data as XML and, coupled with a job or a notification, can log it or send an alert.
All those approaches, as different as their respective advantages may be, have one thing in common: they have to be set up first – in case of doubt, after the first deadlocks have already occurred. Since SQL Server 2008, however, all deadlocks are logged by default. This is done by the automatically running system_health Extended Events session, which, alongside deadlocks, logs further important server events and error information. Here you can find details on it. Since SQL Server 2012 the system_health session logs not only to the memory target "ring_buffer" but also to an event file, so that deadlocks and other errors stay available for quite a while (up to a few days) even after a server restart – until the rollover of the 5 files has occurred. For SQL Server 2008 instances I recommend adding the file target.
How can you read out this data?
SELECT
target_data
FROM
sys.dm_xe_session_targets AS dm_xe_session_targets
INNER JOIN sys.dm_xe_sessions AS dm_xe_sessions
ON dm_xe_session_targets.event_session_address = dm_xe_sessions.address
WHERE
dm_xe_sessions.name = 'system_health'
AND target_name = 'ring_buffer'
"Target_data" is XML text stored as a Unicode text blob, which, once converted to XML, can even be opened directly in SSMS. In code, that looks like this:
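A minimal sketch of that conversion, reusing the ring_buffer query from above:

SELECT
    CAST(dm_xe_session_targets.target_data AS xml) AS target_data_xml  -- the blob, cast to the xml type
FROM
    sys.dm_xe_session_targets AS dm_xe_session_targets
INNER JOIN sys.dm_xe_sessions AS dm_xe_sessions
    ON dm_xe_session_targets.event_session_address = dm_xe_sessions.address
WHERE
    dm_xe_sessions.name = 'system_health'
    AND target_name = 'ring_buffer';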
When you click such a deadlock report in XML format in SSMS, it opens in a new window as an XML document:
- In the picture I collapsed some nodes for brevity. Most important here are: the victim-list, which for a multi-victim deadlock contains one process id per victim, each of which can be investigated more closely in the process-list. In the executionStack you then find the executed SQL command and, if applicable, even the procedure name directly. Below that, the resource-list contains a list of the involved (locked) resources. All nodes are peppered with details such as line number, object name, lock mode, wait time, SPID and, of course, timestamps.
If you want to go further from here, you can of course take the XML apart and try to present individual nodes relationally in column form.
CAST(XEventData.XEvent.query('(data/value/deadlock)[1]') AS xml) AS DeadlockGraph
From there it is easy to parse the individual nodes.
XMLCol.value('(/deadlock/process-list/process)[1]/@isolationlevel', 'varchar(100)')
You can proceed in this manner with almost all attributes and elements.
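Putting the two fragments together, a complete query could look roughly like the following sketch – the aliases XEventData and XMLCol match the fragments above, and the victim-process column is just an example of one further attribute:

SELECT
    XMLCol.value('(/deadlock/process-list/process)[1]/@isolationlevel', 'varchar(100)') AS IsolationLevelProcess1,
    XMLCol.value('(/deadlock/victim-list/victimProcess)[1]/@id', 'varchar(100)')        AS VictimProcessId
FROM
(
    -- shred the ring_buffer into one row per deadlock report
    SELECT CAST(XEventData.XEvent.query('(data/value/deadlock)[1]') AS xml) AS DeadlockGraph
    FROM
    (
        SELECT CAST(target_data AS xml) AS TargetData
        FROM sys.dm_xe_session_targets AS st
        INNER JOIN sys.dm_xe_sessions AS s
            ON st.event_session_address = s.address
        WHERE s.name = 'system_health'
          AND st.target_name = 'ring_buffer'
    ) AS RingBuffer
    CROSS APPLY RingBuffer.TargetData.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS XEventData(XEvent)
) AS Graphs
CROSS APPLY Graphs.DeadlockGraph.nodes('deadlock') AS T(XMLCol);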
The second problem arises when you are dealing with multi-victim deadlocks. Potentially there can be a vast number of locked and killed victim processes, so you can hardly build a schema that is always sufficient. Here is an example of a multi-victim deadlock, displayed in the XEvents GUI (Profiler, like so many things, could not handle that):
Last month I published my SQL Deadlock-Collector & Parser, freely downloadable at CodePlex – here is the article with a detailed description: free Deadlock-Collector & -Parser based on Extended Events. There I decided to always contrast the first two processes, which already helps a lot with most deadlocks. This is what it looks like:
Now, how can you read in the event files, which are also written by the system_health session? For that, the system function sys.fn_xe_file_target_read_file is provided. This is what that can look like:
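A sketch of reading the deadlock reports from the event files (the path is the same example path as in the import call further below; adjust it to your instance's log directory):

SELECT
    CAST(event_data AS xml) AS event_data_xml
FROM sys.fn_xe_file_target_read_file(
        N'D:\LogFiles\SQLServer\SQL2012Dev\system_health_*.xel',  -- wildcard covers the rollover files
        NULL, NULL, NULL)
WHERE object_name = 'xml_deadlock_report';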
If you are already using the Deadlock-Collector to parse and save your deadlocks, you can also use it to manually import deadlocks from any system into the SQL_Analysis_Data database. For that, simply use the procedure Locking.ins_DeadLock in the SQL_Analysis_Code database with the parameter @XESource, which is set to use the ring_buffer by default.
USE SQL_Analysis_Code;
DECLARE @DeadlocksCollected int;
EXECUTE Locking.ins_DeadLock
@XESource = 'D:\LogFiles\SQLServer\SQL2012Dev\system_health_*.xel'
, @DeadlocksCollected = @DeadlocksCollected OUTPUT;
SELECT @DeadlocksCollected AS [Number of new Deadlocks collected:];
Afterwards you can comfortably analyze the deadlocks in a halfway relational structure via views or ad hoc T-SQL, as described here.
Note: It is possible for deadlocks to occur that do not show up in the system_health session targets – neither in the ring_buffer nor in the event file. This happens when the deadlock graph is too big for the buffer memory. To make sure that larger deadlock graphs can also be collected, I recommend increasing the buffer memory of the system_health session.
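A sketch of such a change (8 MB is just an example value; the session needs a restart for the new buffer size to take effect):

ALTER EVENT SESSION [system_health] ON SERVER STATE = STOP;
ALTER EVENT SESSION [system_health] ON SERVER WITH (MAX_MEMORY = 8192 KB);  -- example value
ALTER EVENT SESSION [system_health] ON SERVER STATE = START;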
And for those who are still relying on Profiler & SQL Trace (deprecated since SQL Server 2012, for good reasons), here is one more tip:
This looks more like a "multi-winner deadlock" ;-) And this is how the XEvents GUI displays it:
Happy XEvent parsing
Andreas
New Extended Events for Tracing in SQL Server 2014
Apr 5th
To begin with, there are 3 new packages, all belonging to the new In-Memory OLTP engine XTP, split across 3 DLLs:
Package name | Description | Module name
XtpRuntime | Extended events for the XTP Runtime | {InstanceDir}\MSSQL\Binn\
XtpEngine | Extended events for the XTP Engine | {InstanceDir}\MSSQL\Binn\
XtpCompile | Extended events for the XTP Compile | {InstanceDir}\MSSQL\Binn\
The number of Extended Events increased from 625 in SQL Server 2012 (SP1) to 870 in SQL Server 2014 (RTM). That makes 245 new events! – Plenty of opportunities to dive into internals :-) Two events were renamed: recovery_incremental_checkpoint became recovery_indirect_checkpoint, and hekaton_slow_param_passing became natively_compiled_proc_slow_parameter_passing. Yes, you read that correctly: there was already a number of events for the XTP engine in SQL Server 2012. It is quite common to find bits of future developments within the current release. By the way: SQL Trace has not been touched in the slightest. The number of SQL Trace events has stayed at 180 since SQL Server 2008. – Here is an overview of Extended Events vs SQL Trace events in the various SQL Server versions: Comparing Extended Events vs SQL Trace – or why SQL Trace & Profiler are just a thing of the past :-) Continuing with XEvents: there are no new predicates, but a number of new maps, which is of course not surprising considering the amount of new events. And here is the list of the new Extended Events:
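The list below can be reproduced with a query along these lines against the event metadata DMVs (diff the output against a SQL Server 2012 instance to isolate the new events):

SELECT
    p.name AS package_name,
    o.name AS event_name,
    o.description
FROM sys.dm_xe_objects AS o
INNER JOIN sys.dm_xe_packages AS p
    ON o.package_guid = p.guid
WHERE o.object_type = 'event'     -- only events, not targets/actions/maps
ORDER BY p.name, o.name;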
Package name | Event name | Description
qds | query_store_async_shutdown_failed | Fired when query store encounters an error during async query store shutdown |
qds | query_store_background_cleanup_task_failed | Fired if the background task for cleanup of QDS stale data is not completed successfully |
qds | query_store_background_task_creation_failed | Fired if the background task for QDS data persistence could not be created |
qds | query_store_background_task_initialization_failed | Fired if the background task for QDS data persistence could not be initialized |
qds | query_store_background_task_persist_failed | Fired if the background task for QDS data persistence is not completed successfully |
qds | query_store_begin_persist_runtime_stat | Fired immediately before current runtime statistics for a query plan is persisted to the database. |
qds | query_store_bloom_filter_false_positive | Fired if the Bloom filter for QDS statement texts gives a false positive result |
qds | query_store_check_consistency_init_failed | Fired when check consistency task fails during initialization |
qds | query_store_database_initialization_failed | Fired if initialization of the Query Store for database has failed. The Query Store will be disabled for this database |
qds | query_store_db_cleanup__finished | Fired if cleanup of QDS stale data finished for particular database. |
qds | query_store_db_cleanup__started | Fired if cleanup of QDS stale data started for particular database. |
qds | query_store_disk_size_check_failed | Fired when a check against Query Store on-disk size limit fails |
qds | query_store_disk_size_info | Fired when a check against QDS on-disk size is performed |
qds | query_store_execution_runtime_info | Fired when runtime information is sent to the query store. |
qds | query_store_execution_runtime_info_discarded | Fired when runtime information sent to the query store is discarded. |
qds | query_store_execution_runtime_info_evicted | Fired when runtime information sent to the query store is evicted. |
qds | query_store_flush_failed | Fired when query store failed to flush dirty data |
qds | query_store_loaded | Fired when query store is loaded |
qds | query_store_notify_dirty_shutdown_on_partition_startup | Fired when previous instance of query store for the partition is shutdown by force in order to allow creation of a new instance |
qds | query_store_notify_force_failure_failed | Fired when query store failed to notify force failure |
qds | query_store_persist_task_init_failed | Fired when persist task fails during initialization |
qds | query_store_plan_forcing_failed | Occurs when forcing of plan from qds fails |
qds | query_store_plan_persistence_failure | Fired if there's a failure to persist plan |
qds | query_store_plan_removal | Fired when plan is removed |
qds | query_store_query_persistence_failure | Fired if there's a failure to persist query |
qds | query_store_read_write_failed | Fired if the read/write to Query Store internal tables failed |
qds | query_store_statement_not_found | Fired when a statement couldn't be found due to a race condition or an ambiguous user request. |
qds | query_store_unloaded | Fired when query store is unloaded from memory |
sqlos | nonpreemptive_long_syncio | record long sync io operation in nonpreemptive worker |
sqlos | stuck_dispatcher_callback_executed | Stuck dispatcher callback executed |
sqlos | wait_completed | Occurs when there is a wait completed on a SQLOS controlled resource. Use this event to track wait completion. |
sqlserver | after_natively_compiled_proc_entry_removal_on_drop | Fired after the procedure cache entry is flushed when dropping a natively compiled procedure. |
sqlserver | availability_replica_state | Occurs when the Availability Replica is starting or shutting down. |
sqlserver | before_natively_compiled_proc_entry_removal_on_drop | Fired before the procedure cache entry is flushed when dropping a natively compiled procedure. |
sqlserver | before_redo_lsn_update | Occurs just prior to the update of the EOL LSN |
sqlserver | buffer_pool_eviction_thresholds_recalculated | Lazywriter and/or worker clock has wrapped the BUF array and thresholds are re-calculated. |
sqlserver | buffer_pool_extension_pages_evicted | Page is evicted from the buffer pool extension cache. |
sqlserver | buffer_pool_extension_pages_read | Page is read from the buffer pool extension cache. |
sqlserver | buffer_pool_extension_pages_written | Page or contiguous set of pages evicted into the buffer pool extension cache. |
sqlserver | check_phase_tracing | Occurs when DBCC CHECK enters a new phase of the checking. Use this event to trace the phases of DBCC CHECK process. |
sqlserver | check_thread_message_statistics | Occurs when a phase of DBCC CHECK is finished. Use this event to collect the number of messages a DBCC CHECK thread has sent or received. |
sqlserver | check_thread_page_io_statistics | Occurs when a phase of DBCC CHECK is finished. Use this event to collect the number of logical, physical, and read-ahead IOs a DBCC CHECK thread has performed. |
sqlserver | check_thread_page_latch_statistics | Occurs when a phase of DBCC CHECK is finished. Use this event to collect the number and time of page latch and IO latch waits. |
sqlserver | clustered_columnstore_index_rebuild | Occurs when clustered index on the table was rebuilt. This event is raised three times for ALTER index rebuild operation on CCSI. The event is raised when the operation takes lock on index rebuild resource, when lock is taken on the table and when S lock on the table is upgraded to SCH_M lock to switch indexes in metadata. |
sqlserver | column_store_code_coverage | Code coverage Xevent for columnstore code. |
sqlserver | column_store_index_build_low_memory | Occurs when Storage Engine detects low memory condition and the rowgroup size is reduced. |
sqlserver | column_store_index_build_process_segment | Occurs when a segment is processed |
sqlserver | column_store_index_build_throttle | Shows the statistics of columnstore index build parallelism throttling |
sqlserver | columnstore_tuple_mover_begin_compress | Occurs when column store tuple mover starts compressing a deltastore. |
sqlserver | columnstore_tuple_mover_end_compress | Occurs when column store tuple mover is done compressing a deltastore. |
sqlserver | database_xml_deadlock_report | Produces a deadlock report for a victim, with information scoped to the victim's database. |
sqlserver | db_lock_acquired_from_cache | Occurs when a DB lock is acquired from the XactWorkspace DB lock cache. |
sqlserver | db_lock_released_from_cache | Occurs when a DB lock is released from the XactWorkspace DB lock cache. |
sqlserver | ddl_with_wait_at_low_priority | A DDL statement was executed using the WAIT_AT_LOW_PRIORITY options |
sqlserver | diag_quantum_end | Occurs when the diag is notified of quantum end. |
sqlserver | dyn_throttle_checkpoint | Occurs when checkpointing has been dynamically throttled with a new Checkpoint rate. |
sqlserver | feature_extension | Occurs when received and parses data for a feature in feature extension. |
sqlserver | file_handle_in_use | Fired when a file handle we're trying to delete is in use and we don't expect it to be. The typical response is dumping all the handles in the system via SysInternals HANDLE.EXE |
sqlserver | file_read_enqueued | File read enqueued |
sqlserver | file_read_throttled | File read throttled |
sqlserver | file_write_enqueued | File write enqueued |
sqlserver | file_write_throttled | File write throttled |
sqlserver | ghost_cleanup_task_packet_enqueue | A task packet is enqueued |
sqlserver | ghost_cleanup_task_process_packet | A task packet is dequeued and processed |
sqlserver | ghost_cleanup_task_process_pages_for_db_packet | Purge the pages that contain ghost records found in a database |
sqlserver | ghost_cleanup_task_start | Ghost cleanup task start |
sqlserver | ghost_cleanup_task_suspend | Ghost cleanup task suspend |
sqlserver | hadr_tds_synchronizer_payload_skip | Hadron Tds Listener Synchronizer skipped a listener payload because there were no changes since the previous payload. |
sqlserver | lock_request_priority_state | The priority state of a lock request |
sqlserver | log_cache_write_block | Writing a log block to the log cache |
sqlserver | metadata_ddl_add_column | Occurs when an ALTER TABLE ADD column operation is updating base index. |
sqlserver | metadata_ddl_alter_column | Occurs when an ALTER TABLE ALTER column operation is updating base index. |
sqlserver | metadata_ddl_drop_column | Occurs when an ALTER TABLE DROP column operation is updating base index. |
sqlserver | mixed_extent_activation | Track mixed extent activation and deactivation operations. |
sqlserver | mixed_extent_allocation | Track mixed extent allocation operations |
sqlserver | mixed_extent_deallocation | Track mixed extent deallocation operations. |
sqlserver | mixed_page_allocation | Track mixed page allocation operations |
sqlserver | mixed_page_deallocation | Track mixed page deallocation operations |
sqlserver | mixed_page_scan_file | Track the activity of SGAM scans for mixed page allocation |
sqlserver | mixed_page_scan_page | Track the activity of SGAM scans for mixed page allocation |
sqlserver | mixed_page_skipextent | Track the activity of SGAM scans for mixed page allocation |
sqlserver | natively_compiled_proc_execution_started | Fired before a natively compiled procedure is executed. |
sqlserver | natively_compiled_proc_slow_parameter_passing | Occurs when a Hekaton procedure call dispatch goes to slow parameter passing code path |
sqlserver | process_killed_by_abort_blockers | A process is killed by an ABORT = BLOCKERS DDL statement |
sqlserver | query_execution_batch_hash_aggregation_finished | Occurs at the end of batch hash aggregation. |
sqlserver | query_execution_batch_hash_children_reversed | Occurs each time when hash join reverses build and probe side while processing data spilled to disk. |
sqlserver | query_execution_batch_hash_join_spilled | Occurs each time when hash join spills some data to disk in batch processing. |
sqlserver | query_optimizer_estimate_cardinality | Occurs when the query optimizer estimates cardinality on a relational expression. |
sqlserver | query_optimizer_force_both_cardinality_estimation_behaviors | Both traceflags 2312 and 9481 were enabled, attempting to force both old and new cardinality estimation behaviors at the same time. The traceflags were ignored. Disable one or both of the traceflags. |
sqlserver | query_store_failed_to_capture_query | Fired if the Query Store failed to capture query. The Query Store will not track statistics for this query |
sqlserver | query_store_failed_to_load_forced_plan | Fired if the query failed to load forced plan from QDS. Forcing policy will not be applied |
sqlserver | query_store_persist_on_shutdown_failed | Occurs when SQL Server fails to store dirty entries in Query Store on database shutdown. |
sqlserver | recovery_indirect_checkpoint | Chose to enqueue an indirect checkpoint as we are near our recovery target |
sqlserver | remove_database_cache | Remove database cache |
sqlserver | server_max_workers | Occurs when a request enqueue fails because a worker was not guaranteed. |
sqlserver | session_recoverable_state_change | Occurs when the server determines a state change in term of session recovery on a connectionresilency-enabled connection. |
sqlserver | sql_transaction_commit_single_phase | Occurs when a sql transaction is committed with single phase commit. |
sqlserver | xfcb_blob_properties_obtained | Windows Azure Storage blob property is obtained from response header. |
sqlserver | xfcb_failed_request | Failed to complete a request to Windows Azure Storage. |
sqlserver | xfcb_header_obtained | Response header is obtained from request to Windows Azure Storage. |
sqlserver | xfcb_read_complete | Read complete from Windows Azure Storage response. |
sqlserver | xfcb_request_opened | A request is opened to Windows Azure Storage. |
sqlserver | xfcb_send_complete | Request send to Windows Azure Storage is complete. |
sqlserver | xfcb_write_complete | Request send to Windows Azure Storage is complete. |
sqlserver | xtp_create_procedure | Occurs at start of XTP procedure creation. |
sqlserver | xtp_create_table | Occurs at start of XTP table creation. |
sqlserver | xtp_db_page_allocation_allowed | Indicates that page allocations for the database are allowed. |
sqlserver | xtp_db_page_allocation_disallowed | Indicates that page allocations for the database are disallowed due to memory pressure. |
sqlserver | xtp_deploy_done | Occurs at completion of XTP object deployment. |
sqlserver | xtp_matgen | Occurs at start of MAT generation. |
sqlserver | xtp_offline_checkpoint_scan_start | Fired by XTP offline checkpoint when the checkpoint thread begins. |
sqlserver | xtp_offline_checkpoint_scan_stop | Fired by XTP offline checkpoint when the checkpoint thread stops. |
sqlserver | xtp_recover_done | Occurs at completion of log recovery of XTP table. |
sqlserver | xtp_recover_table | Occurs at start of log recovery of XTP table. |
sqlserver | xtp_storage_table_create | Occurs at just before the XTP storage table is created. |
ucs | ucs_connection_rejected_by_proxy_whitelist | After a connection attempt to the UCS proxy endpoint is rejected by whitelist check |
ucs | ucs_proxy_connect_next_hop | UCS proxy next hop connection |
ucs | ucs_proxy_receive_proxy_connect_message | UCS proxy receive proxy connect message |
ucs | ucs_proxy_route_add | UCS proxy route added |
ucs | ucs_proxy_route_disable | UCS proxy route disabled |
ucs | ucs_proxy_route_refresh | UCS proxy route refreshed |
ucs | ucs_proxy_send_proxy_connect_message | UCS proxy send proxy connect message |
XtpCompile | cgen | Occurs at start of C code generation. |
XtpCompile | invoke_cl | Occurs prior to the invocation of the C compiler. |
XtpCompile | mat_export | Occurs at start of MAT export. |
XtpCompile | pitgen_procs | Occurs at start of PIT generation for procedures. |
XtpCompile | pitgen_tables | Occurs at start of PIT generation for tables. |
XtpEngine | after_changestatetx_event | Fires after transaction changes state. |
XtpEngine | alloctx_event | |
XtpEngine | attempt_committx_event | Is raised when a transaction is asked to commit. |
XtpEngine | before_changestatetx_event | Fires before transaction changes state. |
XtpEngine | dependency_acquiredtx_event | Raised after transaction takes a dependency on another transaction. |
XtpEngine | endts_acquiredtx_event | Fires after transaction acquires an end timestamp. |
XtpEngine | gc_base_generation_evaluation | Indicates that an evaluation of updating the GC base generation has been made. |
XtpEngine | gc_base_generation_updated | Indicates that the oldest active transaction hint used for calculating the GC base generation has been updated. |
XtpEngine | gc_cycle_completed | Indicates that a GC notification has been enqueued. |
XtpEngine | gc_notification | Indicates that GC is processing a notification. |
XtpEngine | redo_single_hk_record | Redo on a HK log record |
XtpEngine | trace_add_delta_filter_begin | Adding delta filter. |
XtpEngine | trace_add_duplicate_delta_filter | Adding duplicate delta filter. |
XtpEngine | trace_adding_tx_filter | Adding tx filter. |
XtpEngine | trace_begin_close_ckpt_processing | Beginning close checkpoint processing: checkpoint flush fence = $2 ($3 state). |
XtpEngine | trace_cfd_entry_deleted | Cleaned up data CFD entry for file $2 $3. |
XtpEngine | trace_cfd_rows_created | Successfully created CFD rows. |
XtpEngine | trace_cfdtable_dump | CfdTable dump |
XtpEngine | trace_cfp_handles_opened | Opened CFP handles for {data: $2 delta: $3} in section $4. |
XtpEngine | trace_cfp_removed_from_freelist | Removed CFP from freelist for destination file. |
XtpEngine | trace_checkpoint_file_flush | Indicating that a file has started a flush operation. |
XtpEngine | trace_checkpoint_load_begin | Starting Checkpoint load with checkpoint timestamp=$2. |
XtpEngine | trace_checkpoint_write_io | Logs an event indicating that an IO has started (when event_type == IoStarted) or that an IO has completed (when event_type == IoComplete). |
XtpEngine | trace_checksum_validation_succeeded | Checksum validation for Hekaton checkpoint file during backup succeeded. |
XtpEngine | trace_ckpt_close_begin | Closing checkpoint in active/passive state. |
XtpEngine | trace_ckpt_close_signaled | Signaling checkpoint close. The checkpoint policy object is flagged to close the current checkpoint |
XtpEngine | trace_ckpt_closed | Checkpoint closed |
XtpEngine | trace_ckpt_load_thread_stats | Statistics of the checkpoint load thread. |
XtpEngine | trace_ckpt_mrt_dump | Mrt dump |
XtpEngine | trace_ckpt_serialization_state_dump | Checkpoint serialization state dump. |
XtpEngine | trace_ckpt_stream_io_stats | Io statistics for this particular stream. |
XtpEngine | trace_close_file_log_rec_created | Successfully created close file log record. |
XtpEngine | trace_data_file_pages_txs | This event gives the number of pages and the transaction range for a data file. |
XtpEngine | trace_data_from_page_deserialized | Deserialized data from page $2 in file $4. |
XtpEngine | trace_data_row_modified | Data row modified/deleted. |
XtpEngine | trace_delete_cfd_rows_begin | Matching MRT not found for the Data CFD. Deleting corresponding CFD Rows. |
XtpEngine | trace_delete_DeletedObjectTable_row_begin | Deleting row from DeletedObjectTable in the context of transaction: $5. |
XtpEngine | trace_delete_merged_source_file | Deleting merged source $7 file $5:$6. DeleteLSN = $2:$3:$4. |
XtpEngine | trace_delete_mrtrow_for_target_begin | Deleting mrt row for target. |
XtpEngine | trace_delta_entry_skipped | Skipping delta {$3, $4, $5} because it is beyond the snapshot checkpoint: $6. |
XtpEngine | trace_delta_file_future_count_rows_pages_update | Updating future count pages and count rows for delta file. |
XtpEngine | trace_delta_file_lastgood_count_rows_pages_update | Updating lastgood count pages and count rows for delta file. |
XtpEngine | trace_delta_file_pages_txs | This event gives the number of pages and the transaction range while reading a delta file for a particular operations. |
XtpEngine | trace_delta_file_processed | Completed processing of Delta file. |
XtpEngine | trace_delta_file_set_clean_or_dirty | Delta file set clean or dirty. |
XtpEngine | trace_delta_filter_removed | Removing delta filter from DeltaFilterMap due to transaction filter. |
XtpEngine | trace_delta_watermark_removed | Removed delta watermark. |
XtpEngine | trace_delta_watermark_updated | Updated delta watermark row for FileId: $4. |
XtpEngine | trace_deserialize_chained_records | Info pertaining to the deserialized content of the mrtRecords chain. |
XtpEngine | trace_deserialize_data_from_page_begin | Deserializing data from page $2 in file $4. |
XtpEngine | trace_deserialize_source_delta_file_begin | Deserializing source delta file $4: page range [$2, $3). |
XtpEngine | trace_dump_cfd_row | Dump cfd row |
XtpEngine | trace_dump_mrt_row | Merge Request |
XtpEngine | trace_file_close_skipped | Skipping file close for $2 as the cfp had been merged; it could not be found in the storage array. |
XtpEngine | trace_file_skipped | Skipping file $4 due to reason $5. |
XtpEngine | trace_file_skipped_with_tx_range | Skipping file $5 with TX range { $3 -> $4 } due to MaxTxId == InvalidTxId. |
XtpEngine | trace_first_page_header_flushed | Flushed first page header for file. |
XtpEngine | trace_flush_io_operation_for_delta_file | This trace is trying to catch pages issued for flush that didn't make it to the file, or were lacking the correct DeltaWatermark Timestamp. |
XtpEngine | trace_flush_skipped_for_closed_data_file | Skipping flush for data file ('$2') as it is closed. |
XtpEngine | trace_future_count_pages_update | Updating future count pages. |
XtpEngine | trace_initial_merge | Doing initial merge for destination $3 at CheckpointTs: $2. |
XtpEngine | trace_insert_DeletedObjectTable_row_begin | Inserting row to DeletedObjectTable in the context of transaction: $5. |
XtpEngine | trace_intialize_worker_file_begin | Processing file close. |
XtpEngine | trace_mark_data_file_closed | Marking the active data file as closed to prevent the offline worker from flushing them. |
XtpEngine | trace_merge_cancelled | Merge was cancelled for destination $2. |
XtpEngine | trace_merge_complete_log_rec_ignored | MergeCompleteLogRecord ignored for destination file $3. |
XtpEngine | trace_merge_complete_log_rec_processed | Successfully processed MergeCompleteLogRecord. |
XtpEngine | trace_mrt_dump | Mrt dump. |
XtpEngine | trace_mrt_row_inplace_update | Mrt row updated in place. |
XtpEngine | trace_mrtrow_not_found | Could not find row with Destination ID: $2 in MRT table. |
XtpEngine | trace_new_delta_watermark_inserted | Inserted new delta watermark row for FileId $3. |
XtpEngine | trace_offline_process_stale_merge_item | Offline processing of stale merge item {$2, $3} Destination $4. |
XtpEngine | trace_pending_tx | Pending Transaction |
XtpEngine | trace_populate_storage_array | Populating the storage array. |
XtpEngine | trace_post_writefile | This event is fired after issuing IO on the proxy. |
XtpEngine | trace_process_file_close_begin | Processing file close. |
XtpEngine | trace_process_merge_complete_log_record | Process merge complete log record. |
XtpEngine | trace_process_merge_request | Process Merge Request |
XtpEngine | trace_process_source_file | Processing source $4 file $2. |
XtpEngine | trace_process_target_file | Processing target $4 file $2. |
XtpEngine | trace_queue_merge_work_item | Queue work item: {$2, $3} $5 - $6 with CheckpointTs as $4. |
XtpEngine | trace_recovered_existing_file | Recovered existing files allocated during forward processing that are not already part of the CCB. |
XtpEngine | trace_resyncing_state_of_file | Resyncing state of file $4 ('$5') post recovery: $2 pages, $3 rows. |
XtpEngine | trace_serialization_object_cleaned | Cleaned serialization object. |
XtpEngine | trace_serialize_abort_transaction_begin | Serializing the abort transaction filter to the delta cache corresponding to its own transaction range. |
XtpEngine | trace_serialize_tx_begin | Serialize TS begin for completing the checkpoint in active state. |
XtpEngine | trace_serialize_tx_end | Serialize TS end for completing the checkpoint. |
XtpEngine | trace_serialized_delta_rows_to_file | Serialized delta rows to file. |
XtpEngine | trace_source_file_load_stats | Load stats for source files in merge. |
XtpEngine | trace_storage_refreshed_for_file | Storage refreshed for file. |
XtpEngine | trace_uninitialize_file | uninitializing_file |
XtpEngine | trace_wrote_delta_watermark | Wrote delta watermark for file: $3, checkpointTs: $2. |
XtpEngine | waiting_for_dependenciestx_event | Raised when we have explicitly waited for dependencies to clear. |
XtpEngine | xtp_before_create_log_record | Fired before submitting a request to the host to serialize a non-transactional log record. |
XtpEngine | xtp_checkpoint_file_flush | Indicates the point at which a given file has been flushed to disk. |
XtpEngine | xtp_checkpoint_file_flush_complete | Indicates the point at which all in-flight buffers have been flushed to disk. |
XtpEngine | xtp_checkpoint_file_read | Indicates reading of a file in XTP checkpoint recovery. |
XtpEngine | xtp_checkpoint_worker_active_log | Indicates that the checkpoint worker has started processing the log records in the active portion of the log. |
XtpEngine | xtp_checkpoint_worker_started | Indicates that the checkpoint worker has started processing the log records. |
XtpEngine | xtp_checkpoint_worker_stopped | Indicates that the checkpoint worker has stopped processing the log records. |
XtpEngine | xtp_checkpoint_write_io | Indicates that the checkpointing subsystem has issued or completed a write IO. |
XtpEngine | xtp_complete_checkpoint | Indicates the begin and end of complete checkpoint processing. |
XtpEngine | xtp_create_log_record | Fires when the XTP engine creates a log record. |
XtpEngine | xtp_delta_marked_dirty | Indicates that a delete/update was serialized and marked a delta file dirty |
XtpEngine | xtp_merge_complete_log_record | Indicates merge complete log record is posted to the log. |
XtpEngine | xtp_merge_log_complete_log_record | Indicates merge log record was interpreted by the offline worker. |
XtpEngine | xtp_merge_process_log_record | Indicates merge log record was interpreted by the offline worker. |
XtpEngine | xtp_merge_request_log_record | Indicates merge request log record is posted to the log. |
XtpEngine | xtp_merge_request_started | Indicates merge request has been picked up by the merge worker thread. |
XtpEngine | xtp_root_deserialized | Indicates that the load of a checkpoint root is complete. |
XtpEngine | xtp_root_serialized | Indicates that the write of the checkpoint root is complete. |
XtpRuntime | bind_md | Occurs prior to binding metadata for a memory optimized table. |
XtpRuntime | bind_tables | Occurs prior to binding tables for a natively compiled procedure. |
XtpRuntime | create_table | Occurs prior to creating memory optimized table. |
XtpRuntime | deserialize_md | Occurs prior to deserializing metadata. |
XtpRuntime | load_dll | Occurs prior to loading the generated DLL. |
XtpRuntime | recover_done | Occurs at completion of checkpoint recovery of a memory optimized table. |
XtpRuntime | recover_table | Occurs at start of checkpoint recovery of a memory optimized table. |
XtpRuntime | serialize_md | Occurs prior to serializing metadata. |
XtpRuntime | unload_dll | Occurs prior to unloading the generated DLL. |
Happy tracing,
Andreas
Comparing Extended Events vs SQL Trace – or why SQL Trace & Profiler are just a thing of the past :-)
Nov 21st
For those of you who are still working with the old tool SQL Server Profiler (Profiler is the frontend for SQL Trace, started with sp_trace_create) and have not quite been able to decide for the new technology, here is some decision guidance. What Extended Events (XEvents) do better than SQL Trace:
*1 Extended Events vs. SQL Trace events per version
*2 As an example: for Service Broker there are 15 events in SQL Trace vs. 44 events in XEvents (SQL Server 2012 SP1)
I hope this helps some of you to unlearn the old habit and accept the small learning curve. A list of all Extended Events in SQL Server 2012 SP1, including their counterparts in SQL Trace, which can be used for migration purposes (SQL Trace -> XEvent trace), is available on this page. (Due to a size restriction, it didn't fit in here anymore.)
> Mapping Extended Events with sys.trace_xe_event_map to SQL Trace <
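For those who want to build such a mapping themselves, SQL Server ships the metadata for it; a minimal sketch:

SELECT
    te.trace_event_id,
    te.name AS sql_trace_event,
    tem.package_name,
    tem.xe_event_name
FROM sys.trace_events AS te
LEFT JOIN sys.trace_xe_event_map AS tem          -- NULL on the right side means: no XEvent counterpart
    ON te.trace_event_id = tem.trace_event_id
ORDER BY te.name;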
Happy better Tracing
Andreas
PS: For 2014, the next round of training on this topic is in the planning stage, with the SQL Server Master-Classes on "Tracing with Extended Events" :-)
Performance overhead of tracing with Extended Event targets vs SQL Trace under CPU Load
Aug 14th
Inspired by the blog post "Measuring Observer Overhead of SQL Trace vs. Extended Events" by Jonathan Kehayias, I was curious to do some more benchmarking. I have already presented the results in my SQL Server Master-Class workshops on XEvents and at the precon of SQLSaturday #230; now I am finally sharing them here. I was interested in the following aspects in particular:
Besides that, once I had the setup ready, I decided to include all the other targets as well as some basic option variations, to get a complete overview that is comparable in itself.
I decided to test a system that is already under CPU pressure, with processor usage almost constantly hitting 100%.
Test environment: "simple but physical" – one Windows Server 2008 R2 machine with SQL Server 2012 SP1, with no services running beyond the minimum required, to avoid side effects. The workload was a mixture of mostly short (sub-second) queries but also several queries with multi-second runtimes (ad hoc and stored procedures), to simulate a somewhat realistic decision-support-system load – without data changes, though.
Trace configurations: I decided on a simple but not uncommon trace that typically collects a lot of data. The trace collects only 2 events:
- Lock acquired
- Lock released
Filtered out: system sessions and databases other than the one under load here. Collected data:
- Lock:Released BinaryData
- Lock:Released DatabaseID
- Lock:Released TransactionID
- Lock:Released SPID
- Lock:Released ObjectID
- Lock:Released Mode
- Lock:Released BigintData1
- Lock:Released IntegerData2
- Lock:Released ObjectID2
- Lock:Released Type
- Lock:Released OwnerID
- Lock:Released IsSystem
- Lock:Acquired BinaryData
- Lock:Acquired DatabaseID
- Lock:Acquired TransactionID
- Lock:Acquired SPID
- Lock:Acquired Duration
- Lock:Acquired ObjectID
- Lock:Acquired IsSystem
- Lock:Acquired Mode
- Lock:Acquired BigintData1
- Lock:Acquired IntegerData2
- Lock:Acquired ObjectID2
- Lock:Acquired Type
- Lock:Acquired OwnerID
This gave me the advantage of having a valid test for all Extended Event targets that are provided (only ETW tracing was left out) – specifically the histogram and pair matching targets. (I deliberately did not care about the few occurrences of lock escalation.) The following trace technologies and targets were used:
- XEvent Trace, Target: None
- XEvent Trace, Target: Ring Buffer
- XEvent Trace, Target: File
- XEvent Trace, Target: Counter
- XEvent Trace, Target: Histogram
- Here I chose to filter on lock acquired and to bucketize on the lock mode
- XEvent Trace, Target: Pair Matching
- Guess, what I matched here ;-)
- SQL Trace, Target: None (I had to trick a little bit to get this to work, but I made sure the behavior of this unsupported configuration did not distort the results: it is just a SQL Trace with no provider processing, so all events are lost by nature.)
- SQL Trace, Target: File
- SQL Profiler, remotely
For each Extended Event target I tested 4 variations, based on the session options:
- Single event loss
- No event loss
- Multiple event loss
- Single event loss with causality tracking on
All other options used their default values for these tests.
Picture: the 24 prepared traces
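As an illustration, one of the 24 session variants (ring buffer target, single event loss) could have been defined roughly as in the following sketch; the session name and the database_id filter value are assumptions:

CREATE EVENT SESSION [LockTrace_RingBuffer_SingleLoss] ON SERVER
ADD EVENT sqlserver.lock_acquired(
    WHERE (sqlserver.database_id = 7 AND sqlserver.is_system = 0)),   -- filter out system sessions and other databases
ADD EVENT sqlserver.lock_released(
    WHERE (sqlserver.database_id = 7 AND sqlserver.is_system = 0))
ADD TARGET package0.ring_buffer
WITH (EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS, TRACK_CAUSALITY = OFF);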
The results: I measured the total runtime of the workload as well as batches per second and CPU time %. Here is the complete table of results:
EL = Event Loss: S = Single event loss, N = No event loss, M = Multiple event loss; CT = Causality Tracking On
For the sake of saving space and time, I will focus on the benchmarks with the options single event loss and no event loss, without causality tracking. In fact, the cost of causality tracking was less than 1% for all targets. So this is the condensed result:
What we can tell from this is:
There is a nice picture of the SQL Trace architecture in Books Online: msdn.microsoft.com/en-us/library/hh245121.aspx
Adding filters: How does adding filters change the observer overhead? It is important to note what I actually used as a filter: the database_id / source_database_id.
This time I focused only on "single" and "no event loss". The results of filtered tracing, without actually storing/logging any events, are as follows:
I did not run SQL Profiler for this test, forgive me – you probably know why by now. ;-)
Wait types for Extended Events: Another aspect I was interested in were the specific XEvent wait types that occur when running Extended Event sessions. (Wait statistics are often the basis for performance analysis.) In general, you will see the following:
When starting a session:
PREEMPTIVE_XE_TARGETINIT
PREEMPTIVE_XE_CALLBACKEXECUTE
PREEMPTIVE_XE_SESSIONCOMMIT
When stopping a session:
PREEMPTIVE_XE_SESSIONCOMMIT
XE_BUFFERMGR_ALLPROCESSED_EVENT
PREEMPTIVE_XE_CALLBACKEXECUTE
While sessions are running, you will see:
XE_DISPATCHER_WAIT - From BOL: Occurs when a background thread that is used for Extended Events sessions is waiting for event buffers to process. - You should be able to safely ignore this unless you believe a problem is occurring with processing of events for async targets. Since this works on a queue you can have bursts of high wait times especially when no XEvent sessions are active.
XE_TIMER_EVENT – From BOL: Used to indicate a background task is waiting for "expired" timers for internal Xevent engine work. - You should be able to safely ignore this one. Just used by the Xevent engine for internal processing of its work. If something was possibly wrong with Xevent processing you might see if this thread ever "wakes up"
When starting the file target, you will also see:
PREEMPTIVE_XE_TARGETINIT
If you run a session with the no event loss option, you might see:
XE_BUFFERMGR_FREEBUF_EVENT - which by BOL means: An Extended Events session is configured for no event loss, and all buffers in the session are currently full. This can indicate that the buffers for an Extended Events session are too small, or should be partitioned.
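If you want to check these wait types on your own system, a quick look at the aggregated wait statistics is enough (a sketch; remember that most of the XE waits listed above are benign background waits):

SELECT wait_type, waiting_tasks_count, wait_time_ms, max_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE '%XE%'        -- catches XE_* and PREEMPTIVE_XE_* wait types
ORDER BY wait_time_ms DESC;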
So, I hope this was interesting for you. You may draw more and other conclusions from these results. Keep in mind though: this is a very specific scenario with no CPU reserves, so the tracing/observer overhead has to show up – it has no choice but to take away (CPU) resources.
Happy Tracing
Andreas
PS: I just discovered that my MCM and SQLskills class buddy Steven Ormrod has also recently blogged about the performance overhead of a SQL Trace to a remote file share in production, here: stevenormrod.com/2013/06/remote-trace-slowing-down-server/
Extended Event File Target size vs SQL Server Trace file – a comparison
Jun 9th
No big science this time, more out of curiosity…
The Extended Events file target for SQL Server saves data as XML, which, as is well known, is a bit "chatty". A student in my recent SQL Server Master-Class workshop on Extended Events came up with the question of how much (more) space he would have to account for when using Extended Events with a file target. Although this greatly depends on the specific events and possibly actions selected, I was a bit curious myself and decided on a small test.
Both the old, deprecated SQL Server Trace and Extended Events can save their data in a file, so it is easy to compare what difference in size the new format makes.
I set up a SQL Server Trace that is almost identical to an Extended Events trace. (You will see why "almost".)
I had to choose a very simple trace, so that the customizable columns of Extended Events would not make the comparison unequal, and ended up with a trace that captures SP:Starting/SP:Completed with the following columns:
You will see later on why I collect Source/DatabaseID twice.
Of course I used a lightweight server-side trace, although for the purpose of this comparison it would not have mattered.
The SQL Trace definition:
-- the declarations and the sp_trace_create call were not part of the original
-- listing; the file path here is illustrative
declare @TraceID int
declare @maxfilesize bigint
declare @on bit
set @maxfilesize = 10240
set @on = 1
exec sp_trace_create @TraceID output, 0, N'R:\Tracing\SQLTrace\ModulesStartEnd', @maxfilesize, NULL

-- event 43 = SP:Completed
exec sp_trace_setevent @TraceID, 43, 3, @on
exec sp_trace_setevent @TraceID, 43, 5, @on
exec sp_trace_setevent @TraceID, 43, 12, @on
exec sp_trace_setevent @TraceID, 43, 13, @on
exec sp_trace_setevent @TraceID, 43, 22, @on
exec sp_trace_setevent @TraceID, 43, 28, @on
exec sp_trace_setevent @TraceID, 43, 34, @on
exec sp_trace_setevent @TraceID, 43, 48, @on
exec sp_trace_setevent @TraceID, 43, 62, @on
-- event 42 = SP:Starting
exec sp_trace_setevent @TraceID, 42, 3, @on
exec sp_trace_setevent @TraceID, 42, 5, @on
exec sp_trace_setevent @TraceID, 42, 12, @on
exec sp_trace_setevent @TraceID, 42, 22, @on
exec sp_trace_setevent @TraceID, 42, 28, @on
exec sp_trace_setevent @TraceID, 42, 34, @on
exec sp_trace_setevent @TraceID, 42, 62, @on

declare @intfilter int
declare @bigintfilter bigint
set @intfilter = 7
-- filter on column 62 (SourceDatabaseID)
exec sp_trace_setfilter @TraceID, 62, 0, 0, @intfilter
As you can see, the trace includes a filter on a specific database ID.
The Extended Events trace session looks like this:
CREATE EVENT SESSION [ModulesStartEnd_ToFile]
ON SERVER
ADD EVENT sqlserver.module_start(
    WHERE ([source_database_id]=(7))),
ADD EVENT sqlserver.module_end(
    WHERE ([source_database_id]=(7)))
ADD TARGET package0.event_file
    (SET filename=N'R:\Tracing\XE\ModulesStartEnd.xel', max_file_size=(10240))
WITH (MAX_MEMORY=4096 KB, EVENT_RETENTION_MODE=NO_EVENT_LOSS, MAX_DISPATCH_LATENCY=30 SECONDS, MAX_EVENT_SIZE=0 KB, MEMORY_PARTITION_MODE=NONE, TRACK_CAUSALITY=OFF, STARTUP_STATE=OFF);
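Since STARTUP_STATE is OFF in this definition, the session still has to be started explicitly before the workload:

ALTER EVENT SESSION [ModulesStartEnd_ToFile] ON SERVER STATE = START;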
You may know that Extended Events include certain columns by default, and for module_start/module_end this includes offset and offset_end. Those two columns are not available for SP:Starting/SP:Completed in SQL Trace. Since they are both integers, I decided to include another column, DatabaseID, in the SQL Trace. SQL Trace also includes the SPID by default, which cannot be deselected; therefore those two columns should equalize it.
Both traces were started before the workload, which ran for a while. At the end, the same number of events had been logged by both technologies in parallel.
SQL Trace event count:
XEvent Trace event count:
100644 + 100644 = 201288, so both captured exactly the same events. :-)
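The XEvent-side count, by the way, can be obtained directly from the file target, for example like this (the wildcard matches the rollover files of the session defined above):

SELECT COUNT(*) AS captured_events
FROM sys.fn_xe_file_target_read_file(N'R:\Tracing\XE\ModulesStartEnd*.xel', NULL, NULL, NULL);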
So, and now to the final question: what size are the files?
See yourself:
Size in Megabytes:
(The numbers in MB are the real size, whereas Windows Explorer shows the size on disk.)
That's a difference of 5.32 MB or, in other words, 29.13%.
And this is what one single module_start event for a function call looks like in XEvents:
<event name="module_start" package="sqlserver" timestamp="2013-06-08T18:41:48.780Z">
  <data name="source_database_id"><value>7</value></data>
  <data name="object_id"><value>103671417</value></data>
  <data name="line_number"><value>1</value></data>
  <data name="offset"><value>0</value></data>
  <data name="offset_end"><value>-1</value></data>
  <data name="object_type"><value><![CDATA[TF]]></value></data>
  <data name="object_name"><value><![CDATA[ufnGetContactInformation]]></value></data>
  <data name="statement"><value></value></data>
</event>
The content is self-explanatory, as XML is supposed to be, and the overhead in size is no surprise at all.
Keep in mind that this post is purely about comparing file sizes, not performance or features. There are good reasons why SQL Trace & Profiler have been deprecated, and Extended Events in SQL Server 2012 surpasses SQL Trace & Profiler by far, in performance as well as in flexibility and usability.
For a performance overhead comparison, check out my recently published benchmark blog post: "Performance overhead of tracing with Extended Event targets vs SQL Trace under CPU Load".
So whenever performance matters, remember to set the session options appropriately, and if the number of events is high, do not use your slowest volume for the file target – the same as for any other tracing activity.
Happy tracing,
Andreas