<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
<channel>
<title>Matt's DFIR Blog</title>
<description>A blog for DFIR thoughts, research and for my future reference</description>
<link>https://mgreen27.github.io/</link>
<atom:link href="https://mgreen27.github.io/sitemap.xml" rel="self" type="application/rss+xml"/>
<pubDate>Sun, 13 Nov 2022 13:27:02 +0000</pubDate>
<lastBuildDate>Sun, 13 Nov 2022 13:27:02 +0000</lastBuildDate>
<generator>Jekyll v3.8.5</generator>
<item>
<title>WMI Event Consumers: what are you missing?</title>
<description><p>WMI Eventing is a fairly well known technique in DFIR; however, some
tools may not provide the coverage you expect. This article covers
WMI eventing visibility and detection, including custom namespaces.</p>
<p><img src="/static/img/article_images/2022-01-12-wmi-eventing/00SelectionBias.png" alt="Selection bias in WWII: missing what is not collected." /></p>
<h2 id="background">Background</h2>
<p>There has been a fair bit of research on, and observation of, WMI eventing
in the field over recent years. In short, a WMI event consumer is a
method of subscribing to certain system events and then enabling an action
of some sort. Common adversary use cases include persistence, privilege
escalation, or use as a collection trigger. Represented as ATT&amp;CK T1546.003,
this technique has been observed in use from APT groups through to commodity
worm and coin miner threats.</p>
<p><img src="/static/img/article_images/2022-01-12-wmi-eventing/01WMIOverview.png" alt="WMI Eventing: 3 system classes" /></p>
<p>There are three system classes in every active event consumer (queried directly in the sketch after this list):</p>
<ol>
<li>__EventFilter is a WQL query that outlines the trigger event of
interest.</li>
<li>__EventConsumer is an action to perform upon triggering an event.</li>
<li>__FilterToConsumerBinding is the registration mechanism that binds
a filter to a consumer.</li>
</ol>
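<p>As a minimal sketch, these classes can be queried directly with VQL’s <code class="language-plaintext highlighter-rouge">wmi()</code> plugin. This inspects root/subscription only; the artifacts discussed below generalise the idea across namespaces:</p>
<pre><code class="language-vql">-- enumerate permanent WMI event consumer bindings in root/subscription
SELECT * FROM wmi(
    query="SELECT * FROM __FilterToConsumerBinding",
    namespace="root/subscription")
</code></pre>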
<p>Most detection focuses on collecting the WMI classes in root/subscription
and, in some tools, the root/default WMI namespace.</p>
<p><img src="/static/img/article_images/2022-01-12-wmi-eventing/02Autoruns.png" alt="Autoruns 14.07: detects root/default and root/subscription namespace WMI event consumers" /></p>
<h4 id="custom-namespaces">Custom Namespaces</h4>
<p>At Blackhat 2018, Lee Christensen and Matt Graeber presented “Subverting
Sysmon: Application of a Formalized Security Product Evasion Methodology”.
This excellent talk focused on defense evasion methodology and highlighted
potential collection gaps in telemetry tools around WMI eventing. In this
case, the focus was on Sysmon’s behaviour of collecting only in
root/subscription; interestingly, it also highlighted the possibility of
implementing __EventConsumer classes in arbitrary namespaces.</p>
<p>It is the detection of WMI Event Consumers in arbitrary namespaces that I’m going
to focus on. For anyone interested in testing, I have written
<a href="https://github.com/mgreen27/mgreen27.github.io/blob/master/static/other/WMIEventingNoisemaker/WmiEventingNoisemaker.ps1">a script to generate WMI event consumers</a>.
This script wraps several PowerShell functions released during the Black
Hat talk to test creating working event consumers.</p>
<p>The first step was to create a custom namespace event consumer. In this
instance I selected the namespace name <code class="language-plaintext highlighter-rouge">totallylegit</code> and attached an
ActiveScript event consumer.</p>
<p><img src="/static/img/article_images/2022-01-12-wmi-eventing/04WMIEventGeneration.png" alt="WMIEventingNoismaker.ps1:Generate active script EventConsumer" /></p>
<h2 id="collection">Collection</h2>
<p>Velociraptor has several valuable artifacts for hunting WMI Event
Consumers:</p>
<ul>
<li><code class="language-plaintext highlighter-rouge">Windows.Sysinternals.Autoruns</code> - leverages a thirdparty deployment of
Sysinternals Autoruns and typically my go to ASEP collection artifact but
limited by visibility in root/default and root/subscription only.</li>
<li>
<p><code class="language-plaintext highlighter-rouge">Windows.Persistence.PermanentWMIEvents</code> - recently upgraded to query
all ROOT namespaces.</p>
</li>
<li>This artifact reports currently deployed permanent WMI Event Consumers.</li>
<li>The artifact collects Binding information, then presents associated Filters and Consumers.</li>
<li>Target a specific namespace, or tick <code class="language-plaintext highlighter-rouge">AllRootNamespaces</code> to collect all
root namespace event consumers.</li>
</ul>
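<p>As a hedged sketch (the parameter name is taken from the description above; treat the exact interface as illustrative), the same collection can be launched from a notebook or the command line by calling the artifact from VQL:</p>
<pre><code class="language-vql">-- collect permanent WMI event consumers across all ROOT namespaces
SELECT * FROM Artifact.Windows.Persistence.PermanentWMIEvents(
    AllRootNamespaces="Y")
</code></pre>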
<p><img src="/static/img/article_images/2022-01-12-wmi-eventing/05collection.png" alt="Windows.Persistence.PermanentWMIEvents: configuration options" /></p>
<p><img src="/static/img/article_images/2022-01-12-wmi-eventing/05collection_results.png" alt="Windows.Persistence.PermanentWMIEvents: results" /></p>
<h4 id="telemetry">Telemetry</h4>
<p>Unfortunately, prior to Windows 10, WMI logging was fairly limited. Sysmon and
other telemetry sources often rely on WMI eventing itself to collect WMI
eventing telemetry events. That means custom classes require the namespace and
class to exist prior to telemetry subscription. Sysmon, as seen below, also
does not have coverage for the root/default namespace.</p>
<p><img src="/static/img/article_images/2022-01-12-wmi-eventing/03SysmonEid20.png" alt="Sysmon collection: Event ID 20 mapping (`__EventConsumer`)" /></p>
<p>The good news is that since Windows 10, WMI logging has improved significantly
and we can now query the Microsoft-Windows-WMI-Activity event log, or
subscribe to the underlying ETW provider of the same name. In the VQL below
I filter the ETW events on event consumer creation or deletion operations.</p>
<pre><code class="language-vql">SELECT
System.TimeStamp AS EventTime,
System.ID as EventId,
strip(prefix='\\\\\.\\',string=EventData.NamespaceName) as NamespaceName,
EventData.Operation as Operation,
GetProcessInfo(TargetPid=int(int=EventData.ClientProcessId))[0] as Process
FROM watch_etw(guid="{1418ef04-b0b4-4623-bf7e-d74ab47bbdaa}")
WHERE EventId = 11
AND Operation =~ 'WbemServices::(PutInstance|DeleteInstance|PutClass|DeleteClass)'
AND Operation =~ 'EventConsumer|EventFilter|FilterToConsumerBinding'
</code></pre>
<p>I have included a completed artifact in the artifact exchange:
<a href="https://docs.velociraptor.app/exchange/artifacts/pages/wmieventing/">Windows.ETW.WMIEventing</a>.
That artifact includes process enrichment, targeting both creation and deletion of EventConsumers.</p>
<p><img src="/static/img/article_images/2022-01-12-wmi-eventing/06ETW.png" alt="Custom namespace provider registration and process enrichment" /></p>
<p><img src="/static/img/article_images/2022-01-12-wmi-eventing/06ETWb.png" alt="Windows.ETW.WMIEventing: all operations event consumer creation and removal" /></p>
<h4 id="event-log">Event Log</h4>
<p>Similar filters can be used with <code class="language-plaintext highlighter-rouge">Windows.EventLogs.EvtxHunter</code> for
detection. It is worth noting that event logs hold less verbose logging for
the registration than ETW, but this use case is helpful when coming late
to the party during an investigation.</p>
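<p>A minimal sketch of that idea, assuming the artifact exposes an <code class="language-plaintext highlighter-rouge">IocRegex</code> parameter as its string filter (treat the parameter name as an assumption, not a documented interface):</p>
<pre><code class="language-vql">-- hunt WMI-Activity event logs for event consumer registration strings
-- IocRegex is assumed to be the artifact's string filter parameter
SELECT * FROM Artifact.Windows.EventLogs.EvtxHunter(
    IocRegex="EventConsumer|EventFilter|FilterToConsumerBinding")
</code></pre>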
<p><img src="/static/img/article_images/2022-01-12-wmi-eventing/07EvtxHunter.png" alt="Windows.EventLogs.EvtxHunter: hunt for event consumer string" /></p>
<p><img src="/static/img/article_images/2022-01-12-wmi-eventing/07EvtxHunterb.png" alt="Windows.EventLogs.EvtxHunter: detect event consumer class creation" /></p>
<h1 id="conclusions">Conclusions</h1>
<p>In this post, we have shown three techniques for detecting WMI event consumers
that are worth considering. We can collect these data points over an entire
network in minutes using Velociraptor’s “hunt” capability. Similarly, the
Velociraptor notebook workflow helps exclude known-good entries quickly as part of analysis.</p>
<p>The Velociraptor platform aims to provide visibility and access
to endpoint data. If you would like to try Velociraptor, it is available on GitHub under an open source license.
As always, please file issues on the bug tracker or ask questions on our
mailing list [email protected]. You can also chat with
us directly on Discord at https://www.velocidex.com/discord</p>
<h2 id="references">References</h2>
<ol>
<li><a href="https://docs.microsoft.com/en-us/windows/win32/wmisdk/about-wmi">Microsoft documentation, About WMI</a></li>
<li><a href="https://attack.mitre.org/techniques/T1546/003/">MITRE ATT&amp;CK T1546.003, Event Triggered Execution: Windows Management Instrumentation Event Subscription</a></li>
<li><a href="https://www.youtube.com/watch?v=R5IEyoFpZq0">Christensen.L and Graeber.M, Blackhat 2018 - Subverting Sysmon: Application of a Formalized Security Product Evasion Methodology</a></li>
<li><a href="https://github.com/jsecurity101/Windows-API-To-Sysmon-Events/">JSecurity101, Windows APIs To Sysmon-Events</a></li>
</ol>
</description>
<pubDate>Wed, 12 Jan 2022 00:00:00 +0000</pubDate>
<link>https://mgreen27.github.io/posts/2022/01/12/wmi-eventing.html</link>
<guid isPermaLink="true">https://mgreen27.github.io/posts/2022/01/12/wmi-eventing.html</guid>
<category>DFIR</category>
<category>WMI</category>
<category>Detection</category>
<category>VQL</category>
<category>ASEP</category>
<category>ETW</category>
<category>posts</category>
</item>
<item>
<title>Cobalt Strike Payload Discovery And Data Manipulation In VQL</title>
<description><p>Velociraptor’s ability to manipulate data is a core platform capability
that drives a lot of the great content we have available in terms of data
parsing for artifacts and live analysis. After a recent engagement with
less common encoded Cobalt Strike beacons, and finding sharable files on
VirusTotal, I thought it would be a good opportunity to walk through some
workflow around data manipulation with VQL for analysis. In this post I
will walk through some background and collection at scale, and finally talk
about processing target files to extract key indicators.</p>
<h2 id="background">Background</h2>
<p>The Microsoft Build Engine (MSBuild.exe) is a signed Windows binary that
can be used to load C# or Visual Basic code via an inline task project
file. Legitimately used in Windows software development, it can handle XML
formatted task files that define requirements for loading and building
Visual Studio configurations. Adversaries can abuse this mechanism for
execution as defence evasion and to bypass application whitelisting -
<a href="https://attack.mitre.org/techniques/T1127/001/">ATT&amp;CK T1127.001</a>.</p>
<p>In this particular engagement, the Rapid7 MDR/IR team responded to an
intrusion in which, during lateral movement, the adversary dropped many
variants of an MSBuild inline task file to several machines and then
executed MSBuild via WMI to load an embedded Cobalt Strike beacon.
Detecting an in-memory Cobalt Strike beacon is trivial for active threats
with our process-based YARA and carving content.</p>
<p>The problem in this case was: how do you discover, then decode these encoded
files on disk quickly to find any additional scope using Velociraptor?</p>
<h2 id="collection">Collection</h2>
<p>The first task is discovery: collecting our in-scope files from the network.
Typically this task may be slow to deploy or rely on capabilities cobbled
together from other teams. The Velociraptor hunt is an easy button for
this use case.</p>
<p><img src="/static/img/article_images/2021-11-21-cobalt/01_new_hunt.png" alt="Velociraptor GUI : hunt : add hunt" /></p>
<p>Velociraptor has several valuable artifacts for hunting over Windows file
systems with YARA: <code class="language-plaintext highlighter-rouge">Windows.Detection.Yara.NTFS</code> and <code class="language-plaintext highlighter-rouge">Generic.Detection.Yara.Glob</code>
spring readily to mind. In this instance I am selecting Yara.NTFS. I have
leveraged this artifact in the field for hunting malware, searching logs, or
any other task where both metadata and content based discovery is desired.</p>
<ul>
<li>This artifact searches the MFT, returns a list of target files then runs Yara over the target list.</li>
<li>The artifact leverages <code class="language-plaintext highlighter-rouge">Windows.NTFS.MFT</code> so similar regex filters can be applied including Path, Size and date.</li>
<li>The artifact also has an option to search across all attached drives and upload any files with Yara hits.</li>
</ul>
<p>Some examples of path regex may include:</p>
<ul>
<li>Extension at a path: Windows/System32/.+\.dll$</li>
<li>More wildcards: Windows/.+/.+\.dll$</li>
<li>Specific file: Windows/System32/kernel32.dll$</li>
<li>Multiple extensions: .(php|aspx|resx|asmx)$</li>
</ul>
<p><img src="/static/img/article_images/2021-11-21-cobalt/02_find_artifact.png" alt="Select artifact : Windows.Detection.Yara.NTFS" /></p>
<p>The file filter <code class="language-plaintext highlighter-rouge">Windows/Temp/[^/]*\.TMP$</code> will suffice in this case to target
our adversary’s payload path before applying our YARA rule. Typically when
running discovery like this, an analyst can also apply additional options, like
file size or timestamp bounds, for use at scale and optimal performance.
The YARA rule deployed in this case was a simple, quick-and-dirty hex conversion of
text taken directly from the project file, referencing the unique variable setup that
was common across the acquired samples.</p>
<pre><code class="language-yara">rule MSBuild_buff {
meta:
description = "Detect unique variable setup MSBuild inline task project file"
author = "Matt Green - @mgreen27"
date = "2021-10-22"
strings:
// byte[] buff = new byte[]
$buff = { 62 79 74 65 5b 5d 20 62 75 66 66 20 3d 20 6e 65 77 20 62 79 74 65 5b 5d }
// byte[] key_code = new byte[]
$key_code = { 62 79 74 65 5b 5d 20 6b 65 79 5f 63 6f 64 65 20 3d 20 6e 65 77 20 62 79 74 65 5b 5d }
condition:
any of them
}
</code></pre>
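<p>As a rough sketch only (the artifact parameter names here are assumptions based on the options described above, not a documented interface), the equivalent collection can be expressed directly in VQL:</p>
<pre><code class="language-vql">-- hunt NTFS for adversary .TMP payloads and apply the YARA rule above
-- PathRegex and YaraRule parameter names are assumptions for illustration
LET MSBuildRule = '''rule MSBuild_buff {
    strings:
        $buff = { 62 79 74 65 5b 5d 20 62 75 66 66 20 3d 20 6e 65 77 20 62 79 74 65 5b 5d }
    condition:
        any of them
}'''
SELECT * FROM Artifact.Windows.Detection.Yara.NTFS(
    PathRegex='Windows/Temp/[^/]*\\.TMP$',
    YaraRule=MSBuildRule)
</code></pre>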
<p><img src="/static/img/article_images/2021-11-21-cobalt/03_configure_artifact.png" alt="Windows.Detection.Yara.NTFS hunt configuration" /></p>
<p>After launching the hunt, results become available inside the hunt entry on the
Velociraptor server for download or additional analysis.</p>
<p><img src="/static/img/article_images/2021-11-21-cobalt/04_hunt_results.png" alt="Hunt results" /></p>
<h2 id="payload-decode">Payload decode</h2>
<p>The Cobalt Strike payload is XOR encoded and represented as a hex-formatted
buffer and key in the embedded C# code, as seen below.</p>
<p><img src="/static/img/article_images/2021-11-21-cobalt/05_payload_b.png" alt="MSBuild inline task project file with CobaltStrike payload" /></p>
<h3 id="enumerate-collected-files-and-find-location-on-server">Enumerate collected files and find location on server</h3>
<p>So far we have only collected files that have suspicious content. Now we want
to post-process the results and try to extract more information from the payload.</p>
<p>The Velociraptor notebook is a GUI component that lets the user run VQL directly
on the server. In this case we are leveraging the notebook attached to our hunt
to post-process results, as opposed to downloading the files and processing them offline.</p>
<p>Our first decode step is to examine all the files we collected in the hunt.
The first query enumerates all the individual collections in the hunt, while the
second query retrieves the files collected for each job.</p>
<pre><code class="language-vql">-- find flow ids for each client
LET hunt_flows = SELECT *, Flow.client_id as ClientId, Flow.session_id as FlowId
FROM hunt_flows(hunt_id='H.C6508PLOOPD2U')
-- extract uploaded files and path on server
LET targets = SELECT * FROM foreach(row=hunt_flows,
query={
SELECT
file_store(path=vfs_path) as SamplePath,
file_size as SampleSize
FROM uploads(client_id=ClientId,flow_id=FlowId)
})
SELECT * FROM targets
</code></pre>
<p><img src="/static/img/article_images/2021-11-21-cobalt/06_notebook_files.png" alt="Find the location of all files collected" /></p>
<h3 id="extract-encoded-payload-and-xor-key">Extract encoded payload and xor key</h3>
<p>For the second step, to extract the target bytes we leverage the <code class="language-plaintext highlighter-rouge">parse_records_with_regex()</code>
plugin to extract the strings of interest (Data and Key) from our target files.
Note: the buffer_size argument allows VQL to examine a larger buffer than the
default size, in order to capture the typically very large payloads in these build
files; it is set just larger than the biggest payload in scope. We have also included
a 200 character limit on the Data field initially, as this improves performance
while iterating on the VQL.</p>
<pre><code class="language-vql">-- regex to extract Data and Key fields
LET target_regex = 'buff = new byte\\[\\]\\s*{(?P&lt;Data&gt;[^\\n]*)};\\s+byte\\[\\]\\s+key_code = new byte\\[\\]\\s*{(?P&lt;Key&gt;[^\\n]*)};\\n'
SELECT * FROM foreach(row=targets,
query={
SELECT
basename(path=SamplePath) as Sample,
SampleSize,
Key, --obtained from regex
read_file(filename=Data,accessor='data',length=200) as DataExtract -- obtained by regex, only output 200 characters
FROM parse_records_with_regex(
file=SamplePath,buffer_size=15000000,
regex=target_regex)
})
</code></pre>
<p><code class="language-plaintext highlighter-rouge">parse_records_with_regex()</code> is a VQL plugin that parses a file with a set of regexp and yields matches as records. The file is read into a large buffer. Then each regular expression is applied to the buffer, and all matches are emitted as rows.</p>
<p>The regular expressions are specified in the Go syntax. They are expected to contain capture variables to name the matches extracted.</p>
<p>The aim of this plugin is to split the file into records which can be further parsed. For example, if the file consists of multiple records, this plugin can be used to extract each record, while <code class="language-plaintext highlighter-rouge">parse_string_with_regex()</code> can be used to further split each record into elements. This works better than trying to write a more complex regex which tries to capture a lot of details in one pass.</p>
<p><img src="/static/img/article_images/2021-11-21-cobalt/07_notebook_regex.png" alt="VQL: extract data and keys" /></p>
<h3 id="extract-normalisation">Extract normalisation</h3>
<p>The third step adds a custom function for hex normalisation and converts the inline
C#-style encoding to a standard hex encoded string which VQL can easily parse.
In this case, the local normalise function ensures we have valid two-character hex.
The <code class="language-plaintext highlighter-rouge">regex_replace()</code> strips the leading ‘0x’ from the hex strings and prepares them for
XOR processing.</p>
<pre><code class="language-vql">-- regex to extract Data and Key fields
LET target_regex = 'buff = new byte\\[\\]\\s*{(?P&lt;Data&gt;[^\\n]*)};\\s+byte\\[\\]\\s+key_code = new byte\\[\\]\\s*{(?P&lt;Key&gt;[^\\n]*)};\\n'
-- normalise function to fix bad hex strings
LET normalise_hex(value) = regex_replace(source=value,re='0x(.)[,}]',replace='0x0\$1,')
SELECT * FROM foreach(row=targets,
query={
SELECT
basename(path=SamplePath) as Sample,
SampleSize,
regex_replace(re="0x|,", replace="", source=normalise_hex(value=Key)) as KeyNormalised,
regex_replace(re="0x|,", replace="", source=normalise_hex(value=Data)) as DataNormalised
FROM parse_records_with_regex(
file=SamplePath,buffer_size=15000000,
regex=target_regex)
})
</code></pre>
<p><img src="/static/img/article_images/2021-11-21-cobalt/08_notebook_normalise.png" alt="VQL: hex normalisation" /></p>
<h3 id="extract-to-bytes">Extract to bytes</h3>
<p>The fourth step converts hex to bytes and validates that the next stage is working. In the example VQL below
we pass the hex text to the <code class="language-plaintext highlighter-rouge">unhex()</code> function to produce raw bytes for our variables.</p>
<pre><code class="language-vql">SELECT * FROM foreach(row=targets,
query={
SELECT
basename(path=SamplePath) as Sample,
SampleSize,
unhex(string=regex_replace(re="0x|,", replace="", source=normalise_hex(value=Key))) as KeyBytes,
read_file(filename=
unhex(string=regex_replace(re="0x|,", replace="", source=normalise_hex(value=Data))),
accessor='data',length=200) as DataBytesExtracted
FROM parse_records_with_regex(
file=SamplePath,buffer_size=15000000,
regex=target_regex)
})
</code></pre>
<p><img src="/static/img/article_images/2021-11-21-cobalt/09_notebook_bytes.png" alt="VQL: extract bytes" /></p>
<h3 id="xor-decode">Xor decode</h3>
<p>VQL’s flexibility comes from its ability to reuse existing artifacts in different ways.
The fifth step is running Velociraptor’s <code class="language-plaintext highlighter-rouge">xor()</code> function and piping the output into
the existing <code class="language-plaintext highlighter-rouge">Windows.Carving.CobaltStrike()</code> configuration decoder.</p>
<pre><code class="language-vql">-- extract bytes
LET bytes &lt;= SELECT * FROM foreach(row=targets,
query={
SELECT
SamplePath, basename(path=SamplePath) as Sample, SampleSize,
unhex(string=regex_replace(re="0x|,", replace="", source=normalise_hex(value=Key))) as KeyBytes,
read_file(filename=
unhex(string=regex_replace(re="0x|,", replace="", source=normalise_hex(value=Data))),
accessor='data') as DataBytes
FROM parse_records_with_regex(
file=SamplePath,buffer_size=15000000,
regex=target_regex)
})
-- pass bytes to the cobalt strike parser and format key indicators I'm interested in
SELECT * FROM foreach(row=bytes,query={
SELECT *,
basename(path=SamplePath) as Sample,SampleSize
FROM Artifact.Windows.Carving.CobaltStrike(TargetBytes=xor(key=KeyBytes,string=DataBytes))
})
</code></pre>
<p><img src="/static/img/article_images/2021-11-21-cobalt/10_notebook_parse.png" alt="VQL: parse config" /></p>
<p>Decoded Cobalt Strike configuration is clearly observed.</p>
<p><img src="/static/img/article_images/2021-11-21-cobalt/11_notebook_config_example.png" alt="Cobalt strike configuration example" /></p>
<p>The smallest file also includes a Cobalt Strike shellcode stager, support for which I have
recently added to the Velociraptor Cobalt Strike parser.</p>
<p><img src="/static/img/article_images/2021-11-21-cobalt/12_notebook_shellcode_example.png" alt="Cobalt strike shellcode example" /></p>
<h3 id="additional-analysis">Additional analysis</h3>
<p>Finally, we may want to extract specific key indicators and compare them across
samples: a simple data stack on the key indicators of interest.</p>
<pre><code class="language-vql">-- pass bytes to cobalt strike parser and format key indicators im interested in
LET cobalt = SELECT *, FROM foreach(row=bytes,query={
SELECT
basename(path=SamplePath) as Sample,SampleSize,
Hash as DecodeHash,
Rule,Offset,Xor,DecodedConfig
FROM Artifact.Custom.Windows.Carving.CobaltStrike(TargetBytes=xor(key=KeyBytes,string=DataBytes))
})
-- quick data stack on a few things to show sample analysis
SELECT count() as Total,
if(condition= Xor=~'^0x(2e|69)$', then=DecodedConfig.BeaconType, else= 'Shellcode stager') as Type,
if(condition= Xor=~'^0x(2e|69)$', then=DecodedConfig.LicenseId, else= DecodedConfig.Licence) as License,
if(condition= Xor=~'^0x(2e|69)$', then=dict(SpawnTox86=DecodedConfig.SpawnTox86,SpawnTox64=DecodedConfig.SpawnTox64), else= 'N/A') as SpawnTo,
if(condition= Xor=~'^0x(2e|69)$', then=DecodedConfig.Port, else= 'N/A') as Port,
if(condition= Xor=~'^0x(2e|69)$', then=DecodedConfig.C2Server, else= DecodedConfig.Server) as Server
FROM cobalt
GROUP BY Type, Licence,SpawnTo,Port,Server
</code></pre>
<p><img src="/static/img/article_images/2021-11-21-cobalt/13_notebook_example.png" alt="VQL results: key indicators of interest" /></p>
<h2 id="conclusions">Conclusions</h2>
<p>In this post we showed discovery, then decoding, of encoded Cobalt Strike beacons on disk.
Velociraptor can read, manipulate and enrich data efficiently across a large network
without the overhead of needing to extract and process it manually.</p>
<p>Whilst most traditional workflows concentrate on collection and offline analysis,
the Velociraptor notebook also enables data manipulation and flexibility in analysis.
If you would like to try out these features, Velociraptor is available on
<a href="https://github.com/Velocidex/velociraptor">GitHub</a> under an open source license.
Please follow the project or ask questions on our mailing list
[email protected]. You can also chat with us directly on
<a href="https://www.velocidex.com/discord">Discord</a>.</p>
</description>
<pubDate>Tue, 09 Nov 2021 00:00:00 +0000</pubDate>
<link>https://mgreen27.github.io/posts/2021/11/09/VQL.html</link>
<guid isPermaLink="true">https://mgreen27.github.io/posts/2021/11/09/VQL.html</guid>
<category>DFIR</category>
<category>detection</category>
<category>cobaltstrike</category>
<category>VQL</category>
<category>posts</category>
</item>
<item>
<title>Windows IPSEC for endpoint quarantine</title>
<description><div style="text-align: center; font-size:70%;"><img width="400" src="/static/img/article_images/2020-07-23-IPSEC/00quarantine.png" /></div>
<p><br /></p>
<p>This post is going to talk about using Windows IPSec for a quarantine use case. I’m going to explain the background, how to configure a policy, and some of the design decisions I made while initially building an endpoint based containment capability for Velociraptor.</p>
<h3 id="background">Background</h3>
<p>As a consultant, part of our workflow may be to contain a machine whilst we carry out an investigation. There are often complexities when carrying out cross-team tasks, so any capability that enables remote management typically saves time and resources. Most modern EDR has some kind of quarantine capability built in; however, my current go-to endpoint IR tool does not. I’m looking for a scriptable, native-tool-based containment capability that can be deployed via Velociraptor.</p>
<p>IPSec has been included in every Microsoft Windows operating system since Windows 2000. Most practitioners think of IPSec as a purely VPN-based technology; however, the Windows implementation enables additional endpoint focused IP security. In addition to encryption and authentication, IPSec uses the same engine as Windows Firewall, so it can be used for packet filtering. With these capabilities in mind, IPSec adds some nice options for teams looking to implement best practices in host based segmentation.</p>
<p>IPSec can be configured via Group Policy Object, Local Security Policy, PowerShell, or Netsh in modern Windows versions. This post will only focus on my use case of IPSec as a local policy deployment. Although PowerShell is the go-to tool for administration of Windows systems, its support for IPSec configuration is lacking prior to Windows Server 2012 R2. For this reason, I decided to use the built-in Netsh tool, which has supported IPSec from Windows 7 through to the current iterations of Windows 10 / Server.</p>
<p>Even though this post does not cover all the IPSec use cases, I have included some links in my resources section for anyone interested in more information and best practice around centralised, group policy based configuration.</p>
<h3 id="ipsec-policy-definitions">IPSec policy definitions</h3>
<p>First of all, we need to understand what makes up an IPSec policy.</p>
<p>Netsh IPSec can be deployed in two different modes - Dynamic and Static: <br />
<strong>Dynamic</strong> - applied to the current state; not a persistent configuration.<br />
<strong>Static</strong> - applied as a policy, which is simply a container for one or more rules. When enabled, the policy populates the dynamic configuration and persists across reboot. When deleted, all objects attached to the policy are removed.</p>
<p>One of my requirements was to enable policy removal with minimal changes to current configuration. Using netsh static IPSec policies, we have a simplified process that can be built, applied and removed cleanly.</p>
<p>To create a policy: <code class="language-plaintext highlighter-rouge">netsh ipsec static add policy name=&lt;string&gt; description=&lt;string&gt;</code><br />
To enable a policy:<code class="language-plaintext highlighter-rouge">netsh ipsec static set policy name=&lt;string&gt; assign=[y|n]</code><br />
To delete a policy: <code class="language-plaintext highlighter-rouge">netsh ipsec static delete policy name=&lt;string&gt;</code><br />
NOTE: when deleting a policy, it is disabled and all its policy objects are also deleted.</p>
<p><strong>Filter List</strong> - Is simply a named container for one or more filters.</p>
<p><strong>Filter</strong> - Filters determine when to activate IPSec Rules.</p>
<p>To create a filter:<br />
<code class="language-plaintext highlighter-rouge">netsh ipsec static add filter filterlist=&lt;string&gt;</code> <br />
<code class="language-plaintext highlighter-rouge">srcaddr=[me|any|&lt;dns&gt;|&lt;server&gt;|&lt;ipv4&gt;|&lt;ipv6&gt;|&lt;ipv4-ipv4&gt;|&lt;ipv6-ipv6&gt;]</code> - source address.<br />
<code class="language-plaintext highlighter-rouge">srcmask=[&lt;mask&gt;|&lt;prefix&gt;]</code> - source netmask, only needed if network IP specified. <br />
<code class="language-plaintext highlighter-rouge">srcport=[&lt;port&gt;]</code> - source port as integer. 0 for all.<br />
<code class="language-plaintext highlighter-rouge">dstaddr=[me|any|&lt;dns&gt;|&lt;server&gt;|&lt;ipv4&gt;|&lt;ipv6&gt;|&lt;ipv4-ipv4&gt;|&lt;ipv6-ipv6&gt;]</code> - destination.
<code class="language-plaintext highlighter-rouge">dstmask=[&lt;mask&gt;|&lt;prefix&gt;]</code> - destination netmask, only needed if network IP specified.<br />
<code class="language-plaintext highlighter-rouge">dstport=[&lt;port&gt;]</code> - destination port as integer. 0 for all.<br />
<code class="language-plaintext highlighter-rouge">protocol=[ANY|ICMP|TCP|UDP|RAW|&lt;integer&gt;]</code> - protocol as name or port. <br />
<code class="language-plaintext highlighter-rouge">mirrored=[&lt;yes&gt;|&lt;no&gt;]</code> - optional and defaults to yes as it enables reverse communication.
<code class="language-plaintext highlighter-rouge">description=[&lt;string&gt;]</code></p>
<p>For example: Allowing RDP traffic inbound to a machine from any IP<br />
(Example only - stay away from this rule in an IR) <br />
<code class="language-plaintext highlighter-rouge">netsh ipsec static add filter filterlist="Test Filter List"</code><br />
<code class="language-plaintext highlighter-rouge">srcaddr=me srcport=3389 dstaddr=any dstport=0 protocol=tcp</code> <br />
<code class="language-plaintext highlighter-rouge">description="quick and dirty RDP filter"</code></p>
<p><strong>Filter Action</strong> - occurs when a Filter is satisfied. An IPSec filter action can permit, block, encrypt or sign the data stream. In my use case, I am only interested in permit and block, as we are not interested in traffic encryption or validation use cases.</p>
<p>To create a filter action:<br />
<code class="language-plaintext highlighter-rouge">netsh ipsec static add filteraction name=&lt;string&gt; action=&lt;permit&gt;|&lt;block&gt;</code></p>
<p><strong>Rules</strong> - An IPSec rule requires a filter list and a filter action and connects them to a policy. An optional component of a rule is authentication, which is out of scope for my current implementation.</p>
<p>To create a rule:<br />
<code class="language-plaintext highlighter-rouge">netsh ipsec static add rule name=&lt;string&gt; policy=&lt;string&gt;</code><br />
<code class="language-plaintext highlighter-rouge">filterlist=&lt;string&gt; filteraction=&lt;string&gt; description=&lt;string&gt;</code></p>
<p><br /></p>
<h3 id="rolling-into-velociraptor">Rolling into Velociraptor</h3>
<p>The above commands translate into a defined process (sketched in VQL after the list):</p>
<ol>
<li>Create policy.</li>
<li>Create filter lists.</li>
<li>Add filters to filter lists.</li>
<li>Create filter actions.</li>
<li>Create rules (link all together).</li>
<li>Apply policy.</li>
<li>Test it works.</li>
</ol>
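<p>A minimal sketch of steps 1-6 driven from VQL, using the real <code class="language-plaintext highlighter-rouge">execve()</code> plugin. The policy, filter list and rule names are illustrative, not the artifact’s actual defaults, and the single block-all filter is deliberately simplistic:</p>
<pre><code class="language-vql">-- illustrative names only; the shipped artifact builds its table dynamically
LET steps = [
  dict(argv=['netsh','ipsec','static','add','policy','name=VRQuarantine']),
  dict(argv=['netsh','ipsec','static','add','filterlist','filterlist=VRBlockAll']),
  dict(argv=['netsh','ipsec','static','add','filter','filterlist=VRBlockAll',
             'srcaddr=me','dstaddr=any','protocol=ANY','mirrored=yes']),
  dict(argv=['netsh','ipsec','static','add','filteraction','name=VRBlock','action=block']),
  dict(argv=['netsh','ipsec','static','add','rule','name=VRBlockRule','policy=VRQuarantine',
             'filterlist=VRBlockAll','filteraction=VRBlock']),
  dict(argv=['netsh','ipsec','static','set','policy','name=VRQuarantine','assign=y'])]
-- run each command in order and report output for auditing
SELECT Stdout, Stderr, ReturnCode
FROM foreach(row=steps, query={ SELECT Stdout, Stderr, ReturnCode FROM execve(argv=argv) })
</code></pre>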
<p>The Velociraptor implementation of this process is transparent apart from a few select components, the goal being a reliable, repeatable capability.</p>
<div style="text-align: center; font-size:70%;"><img src="/static/img/article_images/2020-07-23-IPSEC/01parameters.png" /><br />Quarantine: Parameter options</div>
<p><br /></p>
<p>Configurable items are:<br />
<strong>PolicyName</strong> - for auditing purposes</p>
<p><strong>RuleLookUpTable</strong><br />
This enables custom IPSec filters to be added easily to the permit or block rule configuration. Each field corresponds to a Netsh switch discussed above, and the only required entries are action, source and destination addresses. All other items simply add the entry to the relevant switch in netsh, and bad commands will be surfaced in the results.<br />
<br /></p>
<div style="text-align: center; font-size:70%;"><img src="/static/img/article_images/2020-07-23-IPSEC/02log.png" /><br />Artifact log: executed netsh commands.</div>
<p><br /></p>
<p>The commands in my screenshots resulted from adding to the artifact defaults:</p>
<div style="text-align: center; font-size:70%;"><img src="/static/img/article_images/2020-07-23-IPSEC/02error.png" /><br />Custom filters: RDP and force error</div>
<p><br /></p>
<div style="text-align: center; font-size:70%;"><img width="700" src="/static/img/article_images/2020-07-23-IPSEC/02results.png" /><br />Artifact results: see netsh stderr on incorrect entry.</div>
<p><br /></p>
<p><strong>MessageBox</strong> - if configured, will show a message box to all logged in users. There is a limitation of 256 characters; longer messages will be truncated.</p>
<div style="text-align: center; font-size:70%;"><img width="400" src="/static/img/article_images/2020-07-23-IPSEC/02messagebox.png" /><br />Example messagebox</div>
<p><br /></p>
<p><strong>RemovePolicy</strong> - will simply run the remove policy command for configured policy name.<br />
<br /></p>
<h3 id="caveats">Caveats</h3>
<p>There are a couple of considerations when deploying local IPSec policy.</p>
<p>The first is that it is dangerous to apply local policy: there is a real risk of locking yourself out of access to the machine. DNS resolutions can change, DHCP leases expire, or the block-all approach may accidentally block an unintended resource. Understanding the network and entering appropriate exclusions to mitigate these issues is important. In addition to exclusions, it is recommended to test content prior to live fire.</p>
<p>To simplify this process, I have implemented a capability to extract the agent config and automatically add the Velociraptor server configuration to the exclusions. After policy deployment, the machine will attempt communication back to the Velociraptor server and, if it fails, roll back the quarantine policy. Similarly, all DNS and DHCP traffic is allowed by default in user customisable configuration.</p>
<p>The final caveat is that local IPSec policy cannot be applied if a domain level IPSec policy is applied. In this case the recommendation is to add a separate quarantine rule via Active Directory.</p>
<h3 id="final-thoughts">Final Thoughts</h3>
<p>In this post I have walked through using local IPSec policy to implement machine quarantine in the Velociraptor platform. Despite its limitations, this feature has been useful to call on as needed. Testing, and the age-old advice of “understanding your tools”, are very important.</p>
<p>I already have several optimisations planned - feel free to send through any other thoughts, feedback and optimisations.</p>
<p>Content can be found - <a href="https://github.com/Velocidex/velociraptor/blob/master/artifacts/definitions/Windows/Remediation/Quarantine.yaml">Windows.Remediation.Quarantine</a></p>
<p><br /></p>
<h1 id="further-resources">Further resources</h1>
<ol>
<li>
<p><a href="https://docs.microsoft.com/en-us/windows-server/networking/technologies/netsh/netsh">Microsoft Docs, Network Shell (Netsh).</a></p>
</li>
<li>
<p><a href="https://docs.microsoft.com/en-us/powershell/module/netsecurity/new-netipsecrule?view=win10-ps">Microsoft Docs, New-NetIPsecRule.</a></p>
</li>
<li>
<p><a href="https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc754274(v=ws.11)?redirectedfrom=MSDN">Microsoft Docs, Windows Firewall with Advanced Security.</a></p>
</li>
<li>
<p><a href="https://channel9.msdn.com/Events/Ignite/New-Zealand-2016/M377">Payne, Jessica. Demystifying the Windows Firewall, Ignite 2016</a></p>
</li>
<li>
<p><a href="https://blog.dane.io/2018/04/22/endpoint-isolation-with-the-windows-firewall.html">Stuckey, Dane. Endpoint Isolation with the Windows Firewall, 2018</a></p>
</li>
</ol>
<p><br /><br /></p>
</description>
<pubDate>Thu, 23 Jul 2020 00:00:00 +0000</pubDate>
<link>https://mgreen27.github.io/posts/2020/07/23/IPSEC.html</link>
<guid isPermaLink="true">https://mgreen27.github.io/posts/2020/07/23/IPSEC.html</guid>
<category>DFIR</category>
<category>Velociraptor</category>
<category>VQL</category>
<category>NetSh</category>
<category>IPSec</category>
<category>posts</category>
</item>
<item>
<title>Local Live Response with Velociraptor ++</title>
<description><div style="text-align: center; font-size:70%;"><img width="200" src="/static/img/article_images/2019-12-08-LocalLRwithVRaptor/00title.png" /></div>
<p>In this post I’m going to talk about a live response use case leveraging the Velociraptor project that is worth sharing. Specifically, live response with ancillary collection by embedded third party tools to minimise user impact. As usual, I’m going to provide some background and walk through the steps, then share the code.</p>
<p>EDIT: Please use this post for education only. Although the content and themes of this post are valid, the examples included have been superseded by a GUI based local collector builder on the Velociraptor server.
<br /></p>
<h4 id="background">Background</h4>
<p>Live response collection is one of the most critical stages of modern incident response. A quick, targeted collection of important artefacts means timely answers and more efficient results. Although I prefer a remote agent, keeping the human element out of collection as much as possible, a common use case I encounter is needing to run a local collection from a USB or network share. Typically this means providing a script of some sort with a binaries folder and collection protocol, sometimes to less technical users, with a margin for error.</p>
<p>Mike at Velocidex has posted recently about triage collection (local live response) with Velociraptor:</p>
<ul>
<li><a href="https://medium.com/velociraptor-ir/triage-with-velociraptor-pt-1-253f57ce96c0">Triage with Velociraptor — Pt 1</a></li>
<li><a href="https://medium.com/velociraptor-ir/triage-with-velociraptor-pt-2-d0f79066ca0e">Triage with Velociraptor — Pt 2</a></li>
<li><a href="https://medium.com/velociraptor-ir/triage-with-velociraptor-pt-3-d6f63215f579">Triage with Velociraptor — Pt 3</a></li>
</ul>
<p>One undocumented feature is Velociraptor’s ability to append additional tools to the end of the binary and enable their execution. This capability opens up some really nice use cases for ancillary data collection during a local Velociraptor triage. I’m going to cover creating a Velociraptor local live response binary with WinPMem for memory and Autoruns for autostart extensibility point (ASEP) collection.<br />
<br /></p>
<h4 id="what-do-i-need">What do I need?</h4>
<p>I will be using the current Velociraptor release and building on a Linux platform. I’m looking at building both x64 and x86 Windows versions, so I want to download the relevant Velociraptor binaries to my staging folder.</p>
<div style="text-align: center; font-size:70%;"><img src="/static/img/article_images/2019-12-08-LocalLRwithVRaptor/01Latest.png" /><br />Download Velociraptor binaries</div>
<p>We will also download both x86 and x64 third party binaries supporting my use cases, in this instance Autoruns and WinPMem, which I then add to the relevant “bitness” payload zip files.</p>
<div style="text-align: center; font-size:70%;"><img width="400" src="/static/img/article_images/2019-12-08-LocalLRwithVRaptor/01Other.png" /><br />payload.zip: x64 binaries, payload_x86.zip: x86 binaries</div>
<p><br /></p>
<h4 id="velociraptor-configuration">Velociraptor configuration</h4>
<p>Setting up for local live response requires an autoexecution object and an output configuration. In my case, I set up an artifact called “MultiCollection” with a zipfile output “collection_HOSTNAME.zip”. As there is no folder path specified, the zip will end up in the “start in” folder.</p>
<p>Once the structure of the VQL is understood it is trivial to add additional use cases. Under the parameters section, I have also included an “uploadTable” parameter to add additional direct file downloads not covered by other components. In this case, I’m adding the pagefile, swapfile and hibernation files, if they exist, as defaults. This table is helpful for quick collection, and entries can also be expressed as a glob style search (a sketch follows the screenshot below).</p>
<div style="text-align: center; font-size:70%;"><img width="500" src="/static/img/article_images/2019-12-08-LocalLRwithVRaptor/02Config.png" /><br />Autoexecution VQL object</div>
<p>The next component is the “sources” section, which outlines the VQL queries to run. In my screenshot below, supporting order of volatility, I am running memory collection first, then supporting file uploads. Worth noting: my VQL does not “upload” the memory image to the output zip file; instead I have decided to output “HOSTNAME.aff4” to the same folder as the binary, to optimise resource use and remove the need to push the aff4 to a temporary location prior to adding it to the zip.</p>
<div style="text-align: center; font-size:70%;"><img width="550" src="/static/img/article_images/2019-12-08-LocalLRwithVRaptor/02Config2.png" /><br />Memory acquisition</div>
<p>Velociraptor allows modular use of the collection profiles from Eric Zimmerman’s KapeFiles project. I have chosen KapeFiles.Targets _BasicCollection and some supporting items as my next VQL sources. I have also included a version of <a href="https://gist.github.com/mgreen27/22cd70739e733647e1e23338ca35c9a9#file-local_all-yaml">all currently available switches</a> (at time of writing) to use as a template, removing unwanted items prior to build (an interactive invocation is sketched after the screenshot below).</p>
<div style="text-align: center; font-size:70%;"><img width="550" src="/static/img/article_images/2019-12-08-LocalLRwithVRaptor/02Config3.png" /><br />KapeFiles acquisition</div>
<p>Finally, I am collecting Autoruns output for autostart extensibility point (ASEP) collection. In my VQL I have specifically used wildcards to cover both x86 and x64 binaries and enable use of the same configuration across bitness. I am also using the same trick as my WinPMem execution, outputting to the binary root folder as “HOSTNAME_autoruns.csv”.</p>
<div style="text-align: center; font-size:70%;"><img width="500" src="/static/img/article_images/2019-12-08-LocalLRwithVRaptor/02Config4.png" /><br />Autoruns aquisition</div>
<p><br /></p>
<h4 id="how-do-i-build-it">How do I build it?</h4>
<p>To build, we run Velociraptor in “repack” mode, specifying the input binary, the relevant payload zip, the configuration file and the output binary.</p>
<div style="text-align: center; font-size:70%;"><img src="/static/img/article_images/2019-12-08-LocalLRwithVRaptor/03Build.png" /><br />Velociraptor repack</div>
<p>One thing to note is that, using this technique, the created binary will not contain a valid certificate, as the binary is modified by the “repack” command. This condition occurs with any of the Velociraptor customisations and typically is not a problem during live response.</p>
<p><br /></p>
<h4 id="how-do-i-run-it">How do I run it?</h4>
<p>Copy the relevant binaries to your collection USB, folder or share and execute with administrator privilege.</p>
<div style="text-align: center; font-size:70%;"><img src="/static/img/article_images/2019-12-08-LocalLRwithVRaptor/04Run.png" /><br />...SNIP...</div>
<div style="text-align: center; font-size:70%;"><img src="/static/img/article_images/2019-12-08-LocalLRwithVRaptor/04Run2.png" /><br />Local live response execution</div>
<p>Output will be to the binary folder.</p>
<div style="text-align: center; font-size:70%;"><img width="500" src="/static/img/article_images/2019-12-08-LocalLRwithVRaptor/04Run3.png" /><br />Live response output</div>
<p>Opening collection_HOSTNAME.zip, we can see all the files that were configured for collection / upload.</p>
<div style="text-align: center; font-size:70%;"><img width="500" src="/static/img/article_images/2019-12-08-LocalLRwithVRaptor/04Run4.png" /><br />collection zip contents</div>
<p><br /></p>
<h4 id="final-thoughts">Final Thoughts</h4>
<p>In this post I have walked through using Velociraptor to wrap third party binaries into an easy to use local live response tool. Velociraptor’s modular architecture enables rolling capabilities in and out quickly, for a simple end user experience.</p>
<p>For those that are interested I have included below:</p>
<ol>
<li><a href="https://gist.github.com/mgreen27/22cd70739e733647e1e23338ca35c9a9#file-buildlocallr-sh">A build script for building x86 and x64 versions of my local live response tool</a></li>
<li><a href="https://gist.github.com/mgreen27/22cd70739e733647e1e23338ca35c9a9#file-local_all-yaml">A configuration file with ALL KapeFiles switches</a></li>
<li><a href="https://gist.github.com/mgreen27/22cd70739e733647e1e23338ca35c9a9#file-local-yaml">The reduced configuration from my example</a></li>
</ol>
<p>I hope you have gained some knowledge of Velociraptor for local live response. Please feel free to reach out and provide feedback or improvements.<br />
<br /></p>
<h4 id="further-resources">Further resources</h4>
<ol>
<li><a href="https://www.velocidex.com/about/">Velociraptor Documentation</a></li>
<li><a href="https://medium.com/velociraptor-ir/triage-with-velociraptor-pt-1-253f57ce96c0">Triage with Velociraptor — Pt 1</a></li>
<li><a href="https://medium.com/velociraptor-ir/triage-with-velociraptor-pt-2-d0f79066ca0e">Triage with Velociraptor — Pt 2</a></li>
<li><a href="https://medium.com/velociraptor-ir/triage-with-velociraptor-pt-3-d6f63215f579">Triage with Velociraptor — Pt 3</a></li>
</ol>
<p><br /><br /></p>
</description>
<pubDate>Sun, 08 Dec 2019 00:00:00 +0000</pubDate>
<link>https://mgreen27.github.io/posts/2019/12/08/LocalLRwithVRaptor.html</link>
<guid isPermaLink="true">https://mgreen27.github.io/posts/2019/12/08/LocalLRwithVRaptor.html</guid>
<category>DFIR</category>
<category>Velociraptor</category>
<category>VQL</category>
<category>posts</category>
</item>
<item>
<title>Live response automation with Velociraptor</title>
<description><div style="text-align: center; font-size:70%;"><img width="400" src="/static/img/article_images/2019-11-10-LRwithVRaptor/00title.png" /></div>
<p><br /></p>
<p>This post is going to talk about the Velociraptor project. Specifically, live response and the automation I have built for my own engagements. I’m going to provide some background and walk through a proof of concept, then share the code.</p>
<p>EDIT: Please use this post for historical education only. Although the content and themes of this post are valid, the examples included are no longer valid for the current Velociraptor version. For current API configuration, please refer to the following links, or feel free to contact me directly.</p>
<ul>
<li><a href="https://www.velocidex.com/discord">Chat with us on Discord</a></li>
<li><a href="https://www.velocidex.com/docs/user-interface/api/">Documentation</a></li>
<li><a href="https://www.velocidex.com/blog/medium/2020-03-06-velociraptor-post-processing-with-jupyter-notebook-and-pandas-8a344d05ee8c/">Blog Post</a></li>
</ul>
<h3 id="background">Background</h3>
<p>Velociraptor is an endpoint collection tool developed by Michael Cohen at Velocidex. Mike was the lead developer on many open source tools we know pretty well in our industry: for example Rekall/WinPMem, AFF4 and GRR. Velociraptor was created to simplify the GRR architecture and address some of its complexity problems of a clunky back end and bloated data models. The result is a robust query language (VQL) and an open source collection framework that provides the building blocks of greatness.</p>
<p>The ability to collect and process data efficiently as part of live response workflow is critical for timely incident response. This is all made possible by Velociraptor, and its open ended API enables interoperability with other tools, speeding up this process.</p>
<p>Basic setup of Velociraptor is out of scope for this post. I am running Velociraptor on a hardened Linux platform and plan to walk through setting up a live response processing service. For setup background, I have added a lot of great resources in the references section below. Although not required, this post assumes some familiarity with Velociraptor, and it is recommended to review some of the references if you are not familiar with the platform.</p>
<h3 id="api-basics">API Basics</h3>
<p>The Velociraptor API has a fairly simple architecture and enables VQL queries with an output of familiar VQL result rows. The power of this approach is that those rows can then be enriched and processed to enable complex workflows. It can be invoked both locally and over the network, providing the building blocks we desire in mature incident response.</p>
<div style="text-align: center; font-size:70%;"><img src="/static/img/article_images/2019-11-10-LRwithVRaptor/01APIServices.png" /><br />Velociraptor Services Architecture</div>
<p>The modularity means post processing work is not part of the Velociraptor front end. We are able to essentially watch an event queue, then execute our API based use cases as desired. Performance can be optimised: with an accessible file system, intensive tasks like live response processing can be run on dedicated servers.</p>
<h3 id="api-setup">API Setup</h3>
<p><a href="https://github.com/Velocidex/velociraptor/tree/master/bindings/python">Python bindings</a> are included in the project a long with a <a href="https://github.com/Velocidex/velociraptor/blob/master/bindings/python/client_example.py">working client example</a>. The velocidex team also have a great amount of API connection information on the documentation page. This ensures connection and content development are simple and we can focus on the content.</p>
<div style="text-align: center; font-size:70%;"><img src="/static/img/article_images/2019-11-10-LRwithVRaptor/02APIinstall.png" /><br />Velociraptor Python bindings install commands</div>
<p>An API configuration file is also required for authentication, and key materials are generated similarly to other Velociraptor configuration items.<br />
<em>velociraptor --config server.config.yaml config api_client --name [APIName] &gt; api_client.yaml</em></p>
<p>api_client.yaml:<br />
<em>&lt;SNIP Certificate information&gt;</em><br />
<em>api_connection_string: 127.0.0.1:8001</em><br />
<em>name: [APIName]</em></p>
<p>Note: the default server.config.yaml configures the API service to bind to all interfaces and listen on port 8001. Please ensure the relevant bindings and ports are available.</p>
<p>The example client script contains a great example of setting up an API connection and a query stub. I have chosen to modify the script and add some global variables to simplify execution.</p>
<div style="text-align: center; font-size:70%;"><img width="450" src="/static/img/article_images/2019-11-10-LRwithVRaptor/03APIQuery.png" /><br />Example API python global variables</div>
<p>CONFIG is my generated client API configuration path. I have chosen the default velociraptor config path but this can be any location.</p>
<p>CASES is my output folder path. This can be an ingestion path or distributed storage to plug processed data into additional workflow.</p>
<p>QUERY is the VQL I plan to run through the API. The query monitors the Velociraptor server for completed flow events, i.e. <em>watch_monitoring(artifact=’System.Flow.Completion’)</em>. A WHERE clause extracts flows containing artefacts with results and names containing “KapeFiles” or “LiveResponse”.</p>
<p>What makes VQL so powerful is that we can enrich with additional VQL or formatting. In my example, the SELECT statement extracts the relevant fields pertaining to a completed flow for my processing use cases. This includes a list of uploaded files, their paths and other flow metadata (a sketch of this style of query follows).</p>
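<p>A hedged reconstruction of that style of query; the exact event field names varied across Velociraptor versions, so treat these column references as illustrative only:</p>
<pre><code class="language-vql">-- watch for completed flows and keep only live response collections
-- field names on the event are assumptions for illustration
SELECT ClientId,
       Flow.session_id AS FlowId,
       Flow.artifacts_with_results AS Artifacts
FROM watch_monitoring(artifact='System.Flow.Completion')
WHERE Artifacts =~ 'KapeFiles|LiveResponse'
</code></pre>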
<h3 id="api-processing">API Processing</h3>
<p>Now that we have collected the data points required for processing, it’s as simple as running our normal processing logic over each row of results.</p>
<div style="text-align: center; font-size:70%;"><img width="400" src="/static/img/article_images/2019-11-10-LRwithVRaptor/04Process.png" /><br />Extraction and printing of Flow results</div>
<p><br /></p>
<div style="text-align: center; font-size:70%;"><img width="300" src="/static/img/article_images/2019-11-10-LRwithVRaptor/04ProcessStdOut.png" /><br />StdOut: Flow results</div>
<p>After setting up the relevant variables for processing, we can then shuttle off to tasks. Below is my plaso-based timeliner function for a quick and dirty timeline.</p>
<div style="text-align: center; font-size:70%;"><img width="400" src="/static/img/article_images/2019-11-10-LRwithVRaptor/05TimelinerFlow.png" /><br />Calling timeliner</div>
<p><br /></p>
<div style="text-align: center; font-size:70%;"><img width="500" src="/static/img/article_images/2019-11-10-LRwithVRaptor/05Timeliner.png" /><br />Timeliner: plaso based timeline function</div>
<p>The function sets up the relevant paths for the command, creates the target folder and shells out to the relevant plaso script. Modification is simple and the results can be collected manually or by the data platform agent of choice.</p>
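<p>A minimal sketch of such a function, assuming plaso's log2timeline.py and psort.py are on the path (paths, flags and the CSV output format are illustrative rather than the exact script shown in the screenshot):</p>
<pre>
import os
import subprocess


def timeliner(case_folder, source):
    """Quick and dirty plaso timeline over a collected flow folder."""
    timeline_dir = os.path.join(case_folder, "timeline")
    os.makedirs(timeline_dir, exist_ok=True)
    storage = os.path.join(timeline_dir, "timeline.plaso")
    csv_out = os.path.join(timeline_dir, "timeline.csv")

    # Parse the collected files into a plaso storage file.
    subprocess.run(
        ["log2timeline.py", "--storage-file", storage, source],
        check=True)

    # Sort and render the events as CSV for quick review.
    subprocess.run(
        ["psort.py", "-o", "l2tcsv", "-w", csv_out, storage],
        check=True)
</pre>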
<p>Similarly, file-specific processing based on upload paths means the flow's upload paths are traversed only once for optimal performance. I have also included a test so that some paths are only processed if the artefact of interest was collected; see the sketch after the figure below.</p>
<div style="text-align: center; font-size:70%;"><img width="450" src="/static/img/article_images/2019-11-10-LRwithVRaptor/05ProcessingPathBased.png" /><br />Example path specific processing flow</div>
<h3 id="so-what-do-we-collect">So what do we collect?</h3>
<p>The Velociraptor project has built-in artefacts that can be customised easily. In the early days of Velociraptor I had written custom NTFS collection artifacts to accommodate my collection use cases. The Velocidex team have recently developed an artefact that makes this process much easier. The artefact is called Windows.KapeFiles.Targets and extracts the collection profiles from Eric Zimmerman's KapeFiles project.</p>
<div style="text-align: center; font-size:70%;"><img src="/static/img/article_images/2019-11-10-LRwithVRaptor/06KapeTargets.png" /><br />Artifact: KapeTargets</div>
<p><br /></p>
<p>From a user perspective this is very easy, with preset levels of live response enabled or individually targeted artefact collection. Of course I still have my own live response preferences based on use case, but KapeFiles is a fairly mature and modular collection capability.</p>
<h3 id="how-do-i-run-it">How do I run it?</h3>
<p>To run, simply call the client script from the same folder as the bindings.<br />
For example:<br />
<em>/usr/bin/python3 $VRAPTOR/api/processing.py</em></p>
<p>In my use case I prefer an on demand Velociraptor processing service with the following attributes (a sketch of the unit file follows the figure):</p>
<div style="text-align: center; font-size:70%;"><img width="450" src="/static/img/article_images/2019-11-10-LRwithVRaptor/07Service.png" /><br />Velociraptor Processing Service</div>
<p><br /></p>
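<p>A minimal sketch of such a systemd unit, assuming hypothetical paths (the screenshot above shows my actual service definition). The [Install] section is omitted because the service is started on demand rather than enabled at boot:</p>
<pre>
[Unit]
Description=Velociraptor on demand processing service
After=network.target

[Service]
Type=simple
# Hypothetical location of the processing script and bindings.
WorkingDirectory=/opt/velociraptor/api
ExecStart=/usr/bin/python3 /opt/velociraptor/api/processing.py
</pre>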
<p>Set to on demand, I simply execute service startup with:<br />
<em>sudo systemctl start vraptor-processing</em></p>
<p>Stop with:<br />
<em>sudo systemctl stop vraptor-processing</em></p>
<p>And view status with:<br />
<em>sudo systemctl status vraptor-processing -l</em></p>
<p>Once running, the service will wait for relevant rows to be returned and process them as configured.</p>
<h3 id="final-thoughts">Final Thoughts</h3>
<p>In this post I have walked through using the Velociraptor API for live response processing. Velociraptor is modular, providing access to the underlying services and enabling blue teams to build the workflow they need on the infrastructure that works for them. In my instance the example covers a small subset of what I plan to deploy, but it is already saving time on some really time consuming tasks.</p>
<p>For those that are interested I have included below:</p>
<ol>
<li><a href="other/Velociraptor/VRaptorAPISetup.sh">An install script for the API bindings and service install</a></li>
<li><a href="other/Velociraptor/processing.py">A POC processsing script</a></li>
</ol>
<p>I hope you have gained some knowledge on Velociraptor API setup and one of the most important use cases for incident response. Please feel free to reach out and provide feedback or improvements.</p>
<p><br /></p>
<h1 id="further-resources">Further resources</h1>
<ol>
<li>
<p><a href="https://www.velocidex.com/about/">Velociraptor Documentation</a></p>
</li>
<li>
<p><a href="https://www.velocidex.com/docs/presentations/sans_dfir_summit2019/">Velociraptor Overview at 2019 SANs DFIR Summit</a></p>
</li>
<li>
<p><a href="https://www.velocidex.com/docs/getting-started/">Velociraptor Getting started</a></p>
</li>
<li>
<p><a href="https://www.velocidex.com/docs/user-interface/api/">Velociraptor API documentation</a></p>
</li>
<li>
<p><a href="https://github.com/Velocidex/velociraptor/tree/master/bindings/python">Velociraptor Python Bindings</a></p>
</li>
</ol>
<p><br /><br /></p>
</description>
<pubDate>Sun, 10 Nov 2019 00:00:00 +0000</pubDate>
<link>https://mgreen27.github.io/posts/2019/11/10/LRwithVRaptor.html</link>
<guid isPermaLink="true">https://mgreen27.github.io/posts/2019/11/10/LRwithVRaptor.html</guid>
<category>DFIR</category>
<category>Velociraptor</category>
<category>VQL</category>
<category>posts</category>
</item>
<item>
<title>O365: Hidden InboxRules</title>
<description><div style="text-align: center; font-size:70%;"><img width="300" src="/static/img/article_images/2019-06-09-O365HiddenRules/00title.png" /></div>
<p><br /></p>
<p>In this post I'm going to talk about Office 365 hidden inbox rules. I'm going to give some background, show rule modification, and talk about detection methodology.</p>
<h1 id="background">Background</h1>
<p>Attacks against Office 365 have generated a fair amount of industry acknowledgement in recent times as more and more organisations have moved towards cloud based services. Misconfiguration combined with less than optimal threat awareness means even the simplest attacks can provide access to this crucial service.</p>
<p>Inbox rules are typically part of evil methodology and can be abused across the attack lifecycle:</p>
<ul>
<li>Defence Evasion</li>
<li>Reconnaissance</li>
<li>Persistence</li>
<li>Data collection / Exfiltration</li>
</ul>
<p>Typically inbox rules are simple to detect statically via GUI access or in bulk from the Exchange Management Shell (EMS).</p>
<div style="text-align: center; font-size:70%;"><img width="600" src="/static/img/article_images/2019-06-09-O365HiddenRules/01rule.png" /><br />O365 OWA: Inbox rule https://outlook.office.com/mail/options/mail/rules</div>
<p><br /></p>
<div style="text-align: center; font-size:70%;"><img width="500" src="/static/img/article_images/2019-06-09-O365HiddenRules/01rule2.png" /><br />O365 EMS: Typical Powershell detection.</div>
<p><br /></p>
<h1 id="hidden-rules">Hidden Rules</h1>
<p>The technique is minimally documented; Damian Pfammatter at Compass Security explained the methodology in his September 2018 <a href="https://blog.compass-security.com/2018/09/hidden-inbox-rules-in-microsoft-exchange/">blog post</a>. In summary, inbox rules can be hidden by leveraging an API called the Messaging Application Programming Interface (MAPI), which provides low level access to Exchange data stores.</p>
<p>Below I am accessing the inbox rule manually via the <a href="https://github.com/stephenegriffin/mfcmapi">MFCMAPI tool</a> from a machine with an Outlook profile configured for our in-scope mailbox. IPM.Rule.Version2.Message objects indicate an inbox rule.</p>
<div style="text-align: center; font-size:70%;"><img width="600" src="/static/img/article_images/2019-06-09-O365HiddenRules/02mapi.png" /><br />EvilMove inbox rule: prior to change</div>
<p><br /></p>
<p>Modification is as simple as adding an unsupported value to the PR_RULE_MSG_PROVIDER field (or blanking it out).</p>
<div style="text-align: center; font-size:70%;"><img width="600" src="/static/img/article_images/2019-06-09-O365HiddenRules/02mapi2.png" /><br />EvilMove inbox rule hidden: fake provider details.</div>
<p><br /></p>
<p>Once modified, the inbox rule is hidden and completely operational:</p>
<div style="text-align: center; font-size:70%;"><img width="600" src="/static/img/article_images/2019-06-09-O365HiddenRules/02mapi4.png" /><br />InboxRule hidden: no view in WebUI, InboxRule works as expected.</div>
<p><br /></p>
<div style="text-align: center; font-size:70%;"><img width="500" src="/static/img/article_images/2019-06-09-O365HiddenRules/02mapi5.png" /><br />InboxRule hidden: EMS results.</div>
<p><br /></p>
<h1 id="detection">Detection</h1>
<p>At scale, detection of hidden inbox rules comes down to two main areas.</p>
<h4 id="1-mapi-based---point-in-time">1. MAPI based - point in time.</h4>
<p>Microsoft have released a script for use over Exchange Web Services (EWS), Get-AllTenantRulesAndForms, which enables tenant-wide collection of Exchange rules and forms by querying the low-level data stores. The script enables visibility of hidden rules but leaves out the PR_RULE_MSG_PROVIDER field, which is essential for detection. A modified version from Glen Scales that collects the PR_RULE_MSG_PROVIDER field is available <a href="https://github.com/gscales/O365-InvestigationTooling/blob/master/Get-AllTenantRulesAndForms.ps1">here - Get-AllTenantRulesAndForms</a> (screenshot below).</p>
<ul>
<li>Frequency analysis on RuleMsgProvider field is recommended as a starting point for detection.</li>
<li>Alert and investigate any inbox rules with blank or unusual RuleMsgProvider fields.</li>
<li>Alert and investigate IsPotentiallyMalicious = True - i.e. the rule action is an executable object.</li>
<li>Limitations are high privilege requirements - Global Admin role AND EWS ApplicationImpersonation.</li>
</ul>
<div style="text-align: center; font-size:70%;"><img height="200" src="/static/img/article_images/2019-06-09-O365HiddenRules/03Detection.png" /><br />Exchange Web Services (EWS): Empty RuleName and RuleMsgProvider fields.</div>
<p><br /></p>
<p>The action, condition and command fields (if populated) are base64-encoded raw byte arrays. I have yet to find documentation on the decoding format, or to reverse engineer the data, but there are some identifiable strings that can provide insights into the rule.</p>
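<p>For a quick look, those fields can be decoded and scanned for printable strings. A minimal sketch in Python, assuming the base64 field value has been copied out of the EWS results (the byte layout itself remains undocumented, so this only surfaces embedded strings):</p>
<pre>
import base64
import re


def printable_strings(b64_field, min_len=4):
    """Decode a base64 rule field and extract ASCII and UTF-16LE strings."""
    raw = base64.b64decode(b64_field)
    # Runs of printable ASCII bytes.
    narrow = re.findall(rb"[ -~]{%d,}" % min_len, raw)
    # Runs of printable characters interleaved with NULs (UTF-16LE).
    wide = re.findall(rb"(?:[ -~]\x00){%d,}" % min_len, raw)
    return ([s.decode("ascii") for s in narrow] +
            [s.decode("utf-16-le") for s in wide])
</pre>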