TSYS Systems

This article covers the high-level systems architecture that supports the TSYS/Redwood Group and provides a general overview. Other articles go more in depth on specific systems.

The architecture was designed to:

  • meet the highest levels of information assurance and reliability achievable at a single site
  • support up to Top Secret, non-production R&D (SBIR/OTA) contract work (performed by US citizens only) for the United States Departments of Defense/Energy/State by various components of the TSYS Group

Virtual Machines: Redundant (mix of active/passive and active/active)

With the exception of R&D product development (a hardware/IoT product), we are 99.9% virtualized:

Exceptions to virtualized infrastructure:

  • Raspberry Pi providing stratum 0 time (via hat) and server room badge reader functionality (via USB badge reader and lock relay)
  • intermediate CA HSM passed through to a VM on vm3
  • UPS units connected to vm3 via USB/serial

Any further exceptions to virtualized infrastructure require CEO/board approval and extensive justification.

Networking

  • Functions
    • TFTP server
    • DHCP server
    • HAProxy (port 443 terminates here)
    • Dev/QA/prod core routing/firewall
    • (multi-provider) WAN edge routing/firewall
    • Static/dynamic routing
    • Inbound/outbound SMTP handling
    • Caching/scanning web proxy (scanning via ClamAV)
    • Suricata IDS/IPS

All of the above is provided on an active/passive basis via CARP VIPs, with sub-2 ms failover.
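Under the hood this is standard CARP: both routers advertise a shared virtual IP, and the backup takes over when the master's advertisements stop. A minimal sketch of the mechanism in FreeBSD rc.conf terms (interface name, VHID, password, and addresses are placeholders; on the firewall appliances themselves this is configured through the GUI):

```shell
# /etc/rc.conf fragment (illustrative only)
ifconfig_em0="inet 10.251.x.2/24"
# Shared virtual IP; the peer runs the same vhid with a higher advskew
ifconfig_em0_alias0="inet vhid 10 pass examplepass advskew 0 alias 10.251.x.1/32"
```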

  • Machines

    | VM Name | VM ID | VM Host | Storage Enclosure | Storage Array |
    |---|---|---|---|---|
    | pfv-core-rtr01 | 120 | vm1 | stor2 | tier2vm |
    | pfv-core-rtr02 | xx | vm3 | stor1 | s1-wwwdb |

DNS/NTP (user/server facing)

We do not expose the core domain controllers (dc2/3) directly to users or servers; everything flows through Pi-hole. Firewall rules allow DNS ONLY to pihole1/2; no other DNS is permitted. pihole1/2 are only allowed to relay to the core DCs, and the DCs are allowed to relay to the internet (8.8.8.8).

This blocks the vast majority of spyware/trackerware/malware/C2 traffic (using the Pi-hole blacklists). DNS filtering is the first line of defense against attackers, and it produces far fewer false positives when doing log review.
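In Pi-hole terms, the chain above amounts to pointing the upstream resolvers at the core DCs and letting the firewall enforce the rest; an illustrative setupVars.conf fragment (the DC addresses are placeholders):

```
# /etc/pihole/setupVars.conf (fragment, illustrative)
# Upstream resolvers: ONLY the core domain controllers
PIHOLE_DNS_1=10.251.x.x   # dc2 (placeholder address)
PIHOLE_DNS_2=10.251.x.x   # dc3 (placeholder address)
```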

  • Functions

    • DNS (with ad filtering) (pihole)
    • NTP
  • Machines

    | VM Name | VM ID | VM Host | Storage Enclosure | Storage Array |
    |---|---|---|---|---|
    | pihole1 | 101 | vm3 | stor1 | s1-wwwdb |
    | pihole2 | 103 | vm1 | stor2 | tier2vm |

Database layer

All the data for all the things. Everything is clustered, on a shared-service model.

  • Functions

    • MySQL (Galera)
    • PostgreSQL (Patroni)
    • etcd
    • MQTT broker
    • RabbitMQ
    • Elasticsearch
    • Longhorn
    • K3s control plane
  • Machines

    | VM Name | VM ID | VM Host | Storage Enclosure | Storage Array |
    |---|---|---|---|---|
    | db1 | 125 | vm4 | stor1 | s1-wwwdb |
    | db2 | 126 | vm5 | stor2 | tier2vm |
    | db3 | 127 | vm1 | stor2 | tier2vm |
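For the MySQL tier, clustering follows the standard Galera pattern of a three-node wsrep group across db1/2/3; an illustrative my.cnf fragment (the cluster name is an example, not necessarily our production value):

```
[mysqld]
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_name=tsys-galera          # example name
wsrep_cluster_address=gcomm://db1,db2,db3
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
```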

Web/bizops/IT control plane application layer

All the websites for the TSYS/Redwood Group live on this infrastructure. It's served up via HAProxy (active/passive on r1/42) in an active/active setup: each node runs 50% of the workload and is capable of carrying 100% during node maintenance.
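The active/active split described above corresponds to a round-robin HAProxy backend across the two web nodes; an illustrative haproxy.cfg fragment (the backend port and cert path are examples):

```
# haproxy.cfg fragment (illustrative; ports/paths are examples)
frontend https-in
    bind :443 ssl crt /etc/haproxy/certs/
    default_backend www

backend www
    balance roundrobin            # 50/50 split; either node can carry 100%
    server www1 10.251.50.1:80 check
    server www2 10.251.50.2:80 check
```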

  • Functions

    • All brand properties
    • Data repository (discourse)
    • IT Control plane (job clustering/monitoring/alerting/siem etc)
    • Business operations (marketing/sales/finance/etc)
    • Apache server (for non-dockerized applications)
    • k3s worker nodes (we are moving all workloads to docker containers with longhorn PVC)
  • Machines

    | VM Name | VM ID | VM Host | Storage Enclosure | Storage Array |
    |---|---|---|---|---|
    | www1 | 123 | vm5 | stor2 | tier2vm |
    | www2 | 124 | vm4 | stor1 | s1-wwwdb |

Line of business Application layer

  • Functions

    • Guacamole (serving up RackRental customer workloads, as well as developer workstations)
    • Webmail (for a number of our domains, we don't use Office 365)
  • Machines

    | VM Name | VM ID | VM Host | Storage Enclosure | Storage Array |
    |---|---|---|---|---|
    | tsys-dc-02 | 129 | vm5 | stor2 | tier2vm |
    | tsys-dc-03 | 130 | vm4 | stor1 | s1-wwwdb |

Network Security Monitoring

We will be using Security Onion in some fashion; we are looking into that along with OpenVAS/Lynis/Graylog as a SIEM/scanner. More to follow soon. It will be a distributed, highly available setup.

Virtual Machines: Non-Redundant

VPN

You'll notice VPN missing from the redundant networking list. A few comments on that:

  • We employ a zero-trust access model for the vast majority of systems.
  • We heavily utilize web interfaces/APIs for just about all systems/functionality, and secure access via 2FA/Univention Corporate Server ("AD") and a zero-trust model.
  • We do have our R&D systems behind the VPN for direct SSH access (as opposed to access through various abstraction layers).
  • We utilize WireGuard (via the Algo Ansible setup from Trail of Bits). We don't have a redundant WireGuard setup, just a single small Ubuntu VM. It's worked incredibly well, and the occasional 90 seconds or so of downtime for kernel patching is acceptable.
  • Due to ITAR and other regulations, we utilize a VPN for access control. We may in the future, upon appropriate review and approval, set up HAProxy with SSH SNI certificates to route connections to R&D systems directly.
| VM Name | VM ID | VM Host | Storage Enclosure | Storage Array |
|---|---|---|---|---|
| pfv-vpn | 106 | vm3 | stor2 | tier2vm |
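The Algo-built endpoint produces standard wg-quick configs; a minimal client-side sketch (keys, client address, and endpoint hostname/port are placeholders; the AllowedIPs range is our internal /16):

```
# /etc/wireguard/wg0.conf (client-side fragment; values are placeholders)
[Interface]
PrivateKey = <client-private-key>
Address = 10.0.0.2/24

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 10.251.0.0/16      # route the internal /16 through the tunnel
```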

Physical Surveillance

We can take 90 seconds of downtime for occasional kernel patching and not process the surveillance feeds for a bit. Everyone knows that criminals just loop the footage anyway....

| VM Name | VM ID | VM Host | Storage Enclosure | Storage Array |
|---|---|---|---|---|
| pfv-nvr | 104 | vm5 | stor2 | tier2vm |

Building automation

We can take 90 seconds of downtime for occasional kernel patching and wait to turn on a light or whatever.

| VM Name | VM ID | VM Host | Storage Enclosure | Storage Array |
|---|---|---|---|---|
| HomeAssistant | 116 | vm3 | stor2 | tier2vm |

Sipwise

We can take 90 seconds of downtime for occasional kernel patching, and have the phones "stop ringing" for that long.

| VM Name | VM ID | VM Host | Storage Enclosure | Storage Array |
|---|---|---|---|---|
| sipwise | 105 | vm4 | stor1 | s1-wwwdb |

Online CA (Intermediate to offline root)

We can take 90 seconds of downtime for occasional kernel patching.

We serve the CRL and other "always on" SSL-related bits via the Cloudflare SSL toolkit, running in Docker on the web/app layer over HTTP(S), and that path is fully redundant.

This VM is only used occasionally to issue long lived certs or perform needed maintenance.

It could be down for weeks/months without issue.

It uses XCA for administration and talks to the db cluster. It is locked to vm3 because we pass through a Nitrokey HSM; it works wonderfully.

| VM Name | VM ID | VM Host | Storage Enclosure | Storage Array |
|---|---|---|---|---|
| pfv-ca | 131 | vm3 | stor1 | s1-wwwdb |

Operations/administration/management (OAM)

This is the back office IT bits.

  • Functions

    • librenms (monitoring/alerting/long term metrics)
    • netdata (central dashboard)
    • upsd (central dashboard)
    • rundeck (internal orchestration only)
    • sshaudit
    • lynis
    • crash dump server
    • openvas
    • etc
    | VM Name | VM ID | VM Host | Storage Enclosure | Storage Array |
    |---|---|---|---|---|
    | pfv-toolbox | 121 | vm3 | stor2 | tier2vm |

Storage Infrastructure

  • We keep it very simple and utilize TrueNAS Core on a Dell PowerEdge 2950 with 32gb ram.
  • We run zero plugins.
  • We have a variety of pools set up and served out over NFS to the 10.251.30.0/24 network
  • No Samba, just NFS
  • We utilize built-in snapshots/replication for retention/backup
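Under the hood, TrueNAS snapshot/replication tasks boil down to ZFS snapshots shipped between enclosures with send/recv; an illustrative sketch of the mechanism (pool/dataset names and snapshot labels are hypothetical):

```
# Illustrative only; pool/dataset names and labels are hypothetical
zfs snapshot -r tank/vm@2021-05-01                      # point-in-time snapshot
zfs send -RI tank/vm@2021-04-30 tank/vm@2021-05-01 \
  | ssh stor2 zfs recv -Fu tank/vm-backup               # incremental replication
```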

Virtualization Infrastructure

  • We keep it very simple and utilize Proxmox on a mix of:
    • Dell Optiplex (i3/i7) (all with 32gb ram)
    • Dell PowerEdge (dual socket, quad core xeon) (all with 32gb ram)
    • Dell Precision (i7) (16gb ram) (with an Nvidia Quadro card passed through to a KVM guest (either Windows 10 or Ubuntu Server 20.04, depending on what we need to do))
    • We run the nodes with a single power supply and a single OS drive.

VM node failure is expected (we keep the likelihood low by running the nodes from thumb drives, with syslog configured to log only to the virtualized logging infrastructure). We handle the downtime via the redundancy outlined above: virtual machines are spread across hypervisors/arrays/enclosures, and redundancy happens at the application level.
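The "log only to the virtualized logging infra" piece is plain remote syslog forwarding; an illustrative rsyslog fragment (the collector address is a placeholder):

```
# /etc/rsyslog.d/90-remote.conf (illustrative)
# Forward everything to the central collector over TCP; pair this with
# removing the local file rules so the thumb drive sees minimal writes.
*.* @@10.251.x.x:514
```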

Restoring a virtualization server node would take maybe 30 minutes (plug in a new thumb drive, re-install, join the cluster).

In the meantime, the VM will have auto-migrated to another node using Proxmox HA functionality (if it's an SPOF VM).

Overall system move to production status

HostnameOSSECRundeckNetdatalibrenms monlibrenms logDNS(x)DPNTPSlackLyrisSCAPAuditdOpenVASoxidized
Pfv-vmsrv-01YYYYYYYYN/A
Pfv-vmsrv-02YYYYYYYYN/A
Pfv-vmsrv-03YYYYYYYYN/A
Pfv-vmsrv-04YYYYYYYYN/A
Pfv-vmsrv-06YYYYYYYYN/A
Pfv-time1YYYYYYYN/A
Pfv-stor1N/AN/AN/AYYYxN/AN/AN/A
Pfv-stor2N/AN/AN/AYYYxN/AN/AN/A
Pfv-consrv01N/AN/AN/AYYYYxN/AN/AN/A
Pfv-core-sw01N/AN/AN/AYYYYxN/AN/A
Pfv-core-ap01N/AN/AN/AYN/AYYxN/AN/A
Pfv-lab-sw01N/AN/AN/AYYYx
Pfv-lab-sw02N/AN/AN/AYYYYx
Pfv-lab-sw03N/AN/AN/AYYYx
Pfv-lab-sw04N/AN/AN/AYYYYx
3dpsrvYYYYYYN/AYN/A
Pfv-core-rtr01N/AN/AN/AYYYYxN/AN/A
Pfv-core-rtr02N/AN/AN/AYYYYxN/AN/A
tsys-dc-01YYYYYYY
tsys-dc-02YYYYYYY
tsys-dc-03YYYYYYY
Tsys-dc-04YYYYYYYN/A
pihole1YYYYYYYN/A
pihole2YYYYYYYN/A
pfv-toolboxYYYYYYYN/A
caYYYYYYYN/A
www1YYYYYYY
www2YYYYYYY
www3YYYYYYY
db1YYYYYYY
db2YYYYYYY
db3YYYYYYY

Authentication at TSYS

Password Security

General

  • Bitwarden is used to store all passwords.
  • Authentication to Bitwarden is only possible with 2FA (YubiKey or PIN+password).
  • Senior leadership uses 3FA (PIN code, YubiKey, YubiKey static password).
  • 99% of systems at TSYS are 2FA-only. (Rundeck is not, but this is mitigated by requiring a separate admin account, and it will soon be accessible only via a privileged-access account model with daily expiring passwords.)

Shared Passwords

We minimize the use of shared passwords. When we do use them (for example, with external vendors), we utilize Bitwarden for secure storage/sharing of passwords.

Privileged Accounts

We maintain a separate LDAP account, distinct from our day-to-day LDAP account, for any privileged operations.

CEO/CTO have access to SAW-Master (secure admin workstation). CEO/CFO (and designees) have access to FAW-Master (finance admin workstation).

User Creation / Deletion

We utilize Univention Corporate Server for all privileged system authentication at TSYS. It is not used for line-of-business applications (like discourse/rackrental/esign).

[1] is the vendor documentation on user management.

We have a number of groups defined and membership will depend on the role, access needs etc.

We use a convention of an "mr" prefix for mortal accounts (and later hires), and short names for early hires/immortal accounts.

VPN Endpoint Creation / Deletion

  1. Login via RDP to pfv-rrsvr.pfv.turnsys.net as localuser.
  2. Start the XCA application via the desktop shortcut.
  3. Copy/paste the password from the KeePass entry "XCA - Database" in SAW-Master.
  4. Run through the CSR/sign process.
  5. Export the key/cert.
  6. Connect to https://corpvpn-r1.turnsys.net/system_certmanager.php?act=new and import the key/cert.
  7. Browse to https://corpvpn-r1.turnsys.net/vpn_openvpn_export.php and select roadwarrior vpn TCP:443.
  8. Under the export options for the desired cert, select "Standard Configuration - Archive".

TSYS Group - HQ data center documentation - cooling

Introduction

Cooling is a critical component of any data center. It is often the dominant consumer of energy.

We keep our data center at about 70 degrees F.

Make / model

We have a

  • HiSense Portable Air Conditioner (standalone); the manual lists several possible models, and we're unsure exactly which one we have. It was about $700.00 at Lowes with a multi-year replacement warranty.

which is rated for:

  • 15,000 BTU

It draws about 7 amps when the compressor is running.

With our heat load, the compressor does cycle on/off, so it keeps things cool pretty efficiently from an energy perspective.
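The numbers above can be sanity-checked with quick arithmetic, assuming a standard 120 V supply and the usual BTU/h-to-watt conversion factor (~0.293):

```python
# Back-of-the-envelope numbers for the AC unit described above.
# Assumptions: 120 V supply; 1 BTU/h ~= 0.293 W of cooling.

BTU_PER_HOUR = 15_000   # rated cooling capacity
AMPS = 7                # observed draw with compressor running
VOLTS = 120             # assumed nominal voltage

electrical_watts = AMPS * VOLTS            # ~840 W input while running
cooling_watts = BTU_PER_HOUR * 0.293       # ~4400 W of heat removal
eer = BTU_PER_HOUR / electrical_watts      # efficiency in BTU/h per watt

print(f"input: {electrical_watts} W, cooling: {cooling_watts:.0f} W, EER: {eer:.1f}")
```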

Tips/tricks

  • Extended exhaust hose

We moved the air conditioner to the front of the racks (cold aisle) and extended the exhaust hose to do so.

  • Heat barrier

We deployed a cardboard heat barrier above the racks, to keep hot air behind the racks. We also have a vent duct (made of cardboard) to a panel we removed above the doorway.

  • Insulation

    • Insulate the exhaust hose!
  • Air movers

    • We have a tower fan in the hot row (back), pushing the heat towards the duct.
    • We have two small blowers in the cold row (front) helping "kick back" the air blowing from the HiSense.

Instrumentation

We use:

  • TEMPer USB probe
  • lm-sensors
  • DRAC

all consumed via SNMP by librenms to monitor/alert on temperature. This lets us find hot/cold spots across the racks and make any necessary adjustments.

TSYS Group - HQ data center documentation - power

Introduction

This article covers the electrical power setup for the HQ data center. We've grown it over time, bringing more and more protected capacity online as we got good deals on UPS units/batteries and added additional load.

Circuits

The server room is fed by two 20amp circuits:

  • Circuit 8a serving:

    • dedicated air conditioner (see our cooling article for details on that)
    • vm(1-3) servers
    • network equipment
    • overhead and led lighting
  • Circuit (xx) serving:

    • pfv-stor1/stor2 enclosures and drive arrays
    • vm(4-6)

(future plan)

  • Bring in circuit (xx) (currently serving front porch outlet) serving:
    • rackrental rentable equipment

Outlets

We have upgraded the standard 15 amp outlets that came with the facility to 20 amp outlets. This allows us to run a full 15 amps of sustained load (on 20 amp circuits).
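The sizing above follows the common US electrical-code rule of thumb that continuous load should not exceed 80% of breaker rating; quick arithmetic (assuming 120 V circuits):

```python
# Continuous-load check for the outlet/circuit sizing described above.
# Assumptions: standard US 120 V circuits; 80% continuous-load rule of thumb.

BREAKER_AMPS = 20
CONTINUOUS_FACTOR = 0.8          # common code rule of thumb
SUSTAINED_LOAD_AMPS = 15         # the load the article targets
VOLTS = 120                      # assumed nominal voltage

max_continuous_amps = BREAKER_AMPS * CONTINUOUS_FACTOR
headroom_amps = max_continuous_amps - SUSTAINED_LOAD_AMPS
sustained_watts = SUSTAINED_LOAD_AMPS * VOLTS

print(f"continuous limit: {max_continuous_amps:.0f} A, "
      f"headroom: {headroom_amps:.0f} A, load: {sustained_watts} W")
```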

Surge Protectors

We utilize GE surge protectors, rated for 15 amps, at about $50.00 apiece. These are placed upstream of the UPS units (between the wall outlet and the UPS extension cord).

Extension cords

We do not have outlets close to the UPS stack. We utilize 15amp rated extension cords (from the surge protectors) to feed the UPS inputs.

UPS units

Prod

  • UPS2
    • Make/Model: Dell UPS Rack 1000W LV
    • PDU served:
      • UMPDU1
    • Protected load:
      • pfv-stor1/pfv-stor2 (Dell PowerEdge 2950s)
      • backup USB drives and USB hub
      • external scratch/backup arrays
    • Protected Load Runtime: 12 minutes

UPS5

  • CyberPower UPS (details tbd)
  • PDU served:
    • UMPDU4
    • BenchPDU
    • Cameras
  • Protected load:
    • pfv-vm1/2/3
    • pfv-time1
    • pfv-labsw*
    • pfv-core-ap01
    • pfv-coresw-01
    • pfv-labsw*
  • Protected Load Runtime: 12 minutes

UPS7

  • PDUs served: n/a
  • Monitoring server: n/a (un-monitored ups)
  • Protected load: locking relay for server room

R&D

UPS1

UPS3

UPS4

UPS6

PDU

Unmanaged PDUs

Managed PDUs

TSYS HQ LAN

PFV WAN

Introduction

Provider

  • AT&T Uverse
  • Business DSL (fiber overbuild is projected for late 2021)
  • 60 down/20 up is what I see in speed tests

Subnets

  • 10.251.0.0/16 (See phpipam for all the particulars)

Diagram

Security considerations

Availability considerations

TSYS Group Web Application Runtime Layer Q2 2021 Project Plan

Introduction

The TSYS Group needs a web application runtime layer for its myriad applications.

Broad Requirements for runtime layer

  • No single point of failure
  • High availability/auto recovery for containers
  • Distributed/replicated persistent storage for containers

Delivery schedule and compensation

  • Maximum equity offered: 2.5% upon completion of all milestones by deadline
  • Targeted completion of this project is July 4th, 2021
  • All equity will be fully vested at grant time.
  • The only consequence of non-completion is that no equity will be granted (but you would keep any equity already granted for completed milestones).
  • The contractor is expected to work independently. The TSYS Technical Operations Team is available in Discord for any requirements/access-granting/architecture questions.
  • The TSYS Technical Operations Team is not available to assist with implementation, hence the equity offer to an outside contractor.

Project milestones / deliverables / major areas

storage

0.5% equity

Replicated storage that fulfills the persistent volume claim of docker containers.

Deployed on db1/2/3 virtual machines.

Using something such as Longhorn, but we are open to anything that is production-stable.
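For reference, a workload claims the replicated storage simply by naming the storage class in its PVC; an illustrative manifest (the claim name and size are examples, assuming a storage class named `longhorn`):

```yaml
# Illustrative PVC bound to the replicated storage class (names are examples)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 5Gi
```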

container runtime, control plane, control panel

0.5% equity

  • Kubernetes load balancer: something such as MetalLB, but we are open to other options. Only TCP load balancing is needed; all intelligence (certs/layer 7, etc.) is handled at the routing/network layer already.
  • Kubernetes runtime environment (workers and control plane): something like k3s from Rancher Labs.
  • Kubernetes control panel authenticating to LDAP: something like Rancher.

Control plane will be deployed on db1/2/3

Workers will be deployed on www2/3 and www1 (www1 is currently the production server, so it will be added last).
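For the load-balancer piece, MetalLB's layer-2 mode of that era is configured with a single ConfigMap; an illustrative example (the address range is a placeholder carved from the www subnet):

```yaml
# Illustrative MetalLB layer-2 pool (address range is a placeholder)
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 10.251.50.200-10.251.50.220
```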

Core container functionality (running as containers on the platform):

0.5% equity

  • Docker registry
  • IAM
  • API gateway
  • Jenkins
  • All of the above installed as containers running on the Kubernetes runtime
  • All of the above configured for LDAP authentication
  • No other configuration of the components is in scope

PAAS

1% equity

  • Blue/green and other standard deployment methodologies
  • Able to auto-deploy from CI/CD
  • Orchestrate all of the primitives (load balancer, port assignment, etc.) (docker-compose target? Helm chart? is Rancher suitable?)

This milestone is the most complex and will require discussion and further clarification. We can do so when we get to this point and see how far along the contractor has come, the time remaining, etc.

Things not in scope

LDAP backend

Known Element Enterprises LLC utilizes Univention Corporate Server to provide Active Directory compatible services to the TSYS Group. This is up and running in production and all applications and systems utilize it for AAA.

You will have access to the UCS control plane and are expected to create and document any service accounts, groups, etc. needed for the services you deploy.

Data backend (RDS)

Known Element Enterprises LLC utilizes its own proprietary database-as-a-service solution to provide an HA cluster of:

  • MySQL
  • Redis
  • memcached
  • PostgreSQL
  • etcd
  • MQTT
  • MongoDB
  • Elasticsearch

If the above isn't sufficient (we don't have ZooKeeper, for example), you would work with the Technical Operations Platform Team to deploy whatever additional clustered data store may be required.

You’ll be granted access to the database as a service systems and be expected to create and document any databases you need along with any needed accounts.

Applications running on the platform

TSYS Group Technical Operations team will deploy all applications onto the platform.

You are responsible for providing a demo of the whiteboard application showing storage and node redundancy.

Secrets store

Known Element Enterprises LLC utilizes bitwarden/envwarden for all secret storage. It provides a REST API, and we have existing wrapper code to populate environment variables as needed. You may (if needed) deploy a secrets store for the deliverables (such as Ansible Vault, HashiCorp Vault, etc.) if bitwarden/envwarden isn't sufficient.

General notes

  • It is up to the contractor how to do infrastructure as code for the deliverables. Ansible might have the best coverage; Terraform is a solid contender.
  • All work must be put into Gitea repositories and mirrored to GitHub. You can use the mirror script found at: https://github.com/ReachableCEO/notes-public/blob/master/code/utils/gitMirror.sh with aliases such as the following (modify as desired, of course):

    ```
    alias lpom='git add -A :/ ; git commit -va'
    alias gpom='git push all master'
    alias tesla='lpom;gpom'
    ```
  • Also, all work at the contractor's discretion can be live screencasted, recorded, blogged about, put on GitHub, talked about in any format, etc. We actively support and encourage it! Also feel free to build out in parallel on any other cloud provider.
  • All IP must be licensed AGPL v3, with copyright dual-assigned to both the contractor and Known Element Enterprises LLC.
  • On day 1, the contractor will have privileged access to:

    • opnsense
    • UCS
    • www 1/2/3
    • db 1/2/3

so the contractor can be completely self-sufficient.

A suggested prescriptive technical stack / Work done so far

Followed some of this howto: https://rene.jochum.dev/rancher-k3s-with-galera/

Enough to get k3s control plane and workers deployed:

root@db1:/var/log/maxscale# kubectl get nodes -o wide
NAME   STATUS   ROLES                  AGE   VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
db2    Ready    control-plane,master   30d   v1.20.4+k3s1   10.251.51.2   <none>        Ubuntu 20.04.2 LTS   5.4.0-70-generic   containerd://1.4.3-k3s3
db3    Ready    control-plane,master   30d   v1.20.4+k3s1   10.251.51.3   <none>        Ubuntu 20.04.2 LTS   5.4.0-70-generic   containerd://1.4.3-k3s3
db1    Ready    control-plane,master   30d   v1.20.4+k3s1   10.251.51.1   <none>        Ubuntu 20.04.2 LTS   5.4.0-70-generic   containerd://1.4.3-k3s3
www1   Ready    <none>                 30d   v1.20.4+k3s1   10.251.50.1   <none>        Ubuntu 20.04.2 LTS   5.4.0-70-generic   containerd://1.4.3-k3s3
www2   Ready    <none>                 30d   v1.20.4+k3s1   10.251.50.2   <none>        Ubuntu 20.04.2 LTS   5.4.0-70-generic   containerd://1.4.3-k3s3
root@db1:/var/log/maxscale# 

and a bit of load balancing setup going:

fenixpi% kubectl get pods -A -o wide
NAMESPACE        NAME                                        READY   STATUS             RESTARTS   AGE   IP            NODE   NOMINATED NODE   READINESS GATES
metallb-system   speaker-7nsvs                               1/1     Running            10         30d   10.251.51.2   db2    <none>           <none>
kube-system      metrics-server-86cbb8457f-64ckz             1/1     Running            18         16d   10.42.2.23    db1    <none>           <none>
kube-system      local-path-provisioner-5ff76fc89d-kcg7k     1/1     Running            34         16d   10.42.2.22    db1    <none>           <none>
metallb-system   controller-fb659dc8-m2tlk                   1/1     Running            12         30d   10.42.0.42    db3    <none>           <none>
metallb-system   speaker-vfh2p                               1/1     Running            17         30d   10.251.51.3   db3    <none>           <none>
kube-system      coredns-854c77959c-59kpz                    1/1     Running            13         30d   10.42.0.41    db3    <none>           <none>
kube-system      ingress-nginx-controller-7fc74cf778-qxdpr   1/1     Running            15         30d   10.42.0.40    db3    <none>           <none>
metallb-system   speaker-7bzlw                               1/1     Running            3          30d   10.251.50.2   www2   <none>           <none>
metallb-system   speaker-hdwkm                               0/1     CrashLoopBackOff   4633       30d   10.251.51.1   db1    <none>           <none>
metallb-system   speaker-nhzf6                               0/1     CrashLoopBackOff   1458       30d   10.251.50.1   www1   <none>           <none>

Beyond that, it's greenfield.

Phase 1 (core capabilities )

  • Storage for persistent volume claims: https://longhorn.io/
  • Control plane / workers for k8s: https://k3s.io/
  • Control panel for k8s: https://rancher.com/products/rancher/
  • Network load balancer: https://metallb.universe.tf/

Phase 2 (container and application support infrastructure)

Phase 3 (application deployment support infrastructure)

Need to research this more.

Some kind of PaaS that would orchestrate storage, HA, and network IP/port assignment.

Known Element Enterprises LLC already has HAProxy and Let's Encrypt set up and in production use. All DNS is wildcarded to the HAProxy IP, so any service can be spun up just by provisioning a cert and a VIP/ACL.
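Concretely, spinning up a new service is one ACL and one backend in haproxy.cfg; an illustrative fragment (the hostname and backend address are examples):

```
# haproxy.cfg fragment: routing a new service by hostname (names are examples)
frontend https-in
    acl host_newapp hdr(host) -i newapp.turnsys.com
    use_backend be_newapp if host_newapp

backend be_newapp
    server newapp 10.251.50.10:8080 check
```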

Some possibilities:

TSYS Group - Engineering Documentation - Visual Studio Code Environment Setup Guide

Introduction

This is the TSYS Visual Studio Code setup guide. It covers how to set up VsCode for all aspects of the TSYS Group.

We have a very complex total stack, but don't despair, you will only need a small subset of this.

Which subset of course depends on what part of the TSYS mission you are supporting!

Environmental considerations/assumptions

  • Charles' setup is the most comprehensive, as he is the CTO and needs to develop for all pieces of the stack/products.

  • Do not just blindly follow this guide! Pick the pieces you need for your work. If you have any questions, ask in Discord or post to Discourse.

  • Working against a remote server/container/k8s cluster over SSH via VsCode Remote

  • VsCode Remote Dev is heavily utilized (almost if not exclusively)

  • Source code resides in home directory on the server farm, but is edited "locally" on your workstation with VsCode (Remote)

  • Using TSYS self hosted Gitea git instance

  • Using TSYS self hosted Jenkins CI

  • docker/kubectl commands are present and configured to run against the cluster (and you are connected to the VPN)

  • Developing in Windows 10/Mac OSX/Linux with a GUI environment running native VsCode (CNW daily driver is a raspberry pi 4 with 8gb ram to help ensure lowest common denominator support/good performance)

  • Using Chrome web browser (firefox/safari may work, but are not supported at all)

  • Developing primarily at the "git push, magic happens" abstraction layer

  • Need to occasionally inspect/debug the magic at various stages of the pipeline

  • Need to frequently debug running code on a variety of targets (pi/arduino etc)

  • All text documentation is written in Markdown and is posted to Git/Discourse as Markdown

  • (tbd soon, actively experimenting) All diagrams are produced using (blockdiag? uml? markdown? all the above? extension(s))

Short version

Very soon (May 2021) you'll have two options for easy stack deployment for your product development environment:

  1. docker pull TSYSVSC and use with https://code.visualstudio.com/docs/remote/containers

  2. Browse to https://desktop.turnsys.com and get a full engineering stack for whatever product you are working on.

Read on to understand the pieces and particulars in case you want to build your own setup.

Requirements and dependencies

Here are the tool and language requirements of all the TSYS engineering projects/programs/products.

Software Programs Used

You'll need to set up some subset of the tools below to do your job (in addition to VsCode), and also use VsCode with various TSYS-hosted services like Gitea, Jenkins, and Docker/K3s.

Setup of external tools is outside the scope of this document. For guidance on tool setup, please see the following links:

This software has two modes of deployment:

  • downloaded from the vendor and setup on your physical workstation (used for dev/testing/experimenting)
  • downloaded from the /subo directory and run on your physical workstation, or run from the /subo directory on a virtual workstation you log into remotely

The software that is built/deployed in /subo is the only version approved for production use.

The exception is software with an OTS notation next to its name, in which case you can use the latest stable version from the vendor.

Once you've setup your needed external tools, return to this document and continue with setup of VsCode as needed to work with the tooling you installed.

| Program | Used By | Link | Product Scope |
|---|---|---|---|
| obs (OTS) | All | https://obsproject.com/ | All |
| bitwarden (OTS) | All | https://bitwarden.com/ | All |
| docear (OTS) | All | https://docear.org/ | All |
| polar (OTS) | All | https://getpolarized.io/ | All |
| calibre (OTS) | All | https://calibre-ebook.com/ | All |
| vym (OTS) | All | http://www.insilmaril.de/vym/ | All |
| argouml (OTS) | All | https://github.com/argouml-tigris-org/argouml | All |
| bonita (OTS) | All | https://www.bonitasoft.com/ | All |
| Docker Desktop (OTS) | All | https://www.docker.com/products/docker-desktop | All |
| Pandoc (OTS) | All | https://pandoc.org/ | All |
| Esim | Team-HwEng | https://esim.fossee.in/ | MorseFlyer (avionics), MorseSkynet |
| Kicad | Team-HwEng | https://gitlab.com/kicad/code/kicad | MorseFlyer (avionics), MorseSkynet |
| LibrePCB | Team-HwEng | https://librepcb.org/ | MorseFlyer (avionics), MorseSkynet |
| NgSpice | Team-HwEng | http://ngspice.sourceforge.net/resources.html | MorseFlyer (avionics), MorseSkynet |
| qrouter | Team-HwEng | http://opencircuitdesign.com/qrouter/ | MorseFlyer (avionics), MorseSkynet |
| Gerbv | Team-HwEng | http://gerbv.geda-project.org/ | MorseFlyer (avionics), MorseSkynet |
| camotics | Team-MechEng | https://camotics.org/ | MorseFlyer (avionics), MorseSkynet |
| GprMax | Team-HwEng | https://github.com/gprMax/gprMax | MorseFlyer (avionics), MorseSkynet |
| SciKit-RF | Team-HwEng | https://scikit-rf.readthedocs.io/en/latest/ | MorseFlyer (avionics), MorseSkynet |
| Flora | Team-HwEng/SwEng | https://flora.aalto.fi/ | MorseFlyer (avionics), MorseSkynet |
| inkscape | Team-HwEng/MechEng | https://inkscape.org/ | MorseFlyer, MorseSkynet |
| gerber2graphtec | Team-HwEng | https://github.com/pmonta/gerber2graphtec | MorseFlyer, MorseSkynet |
| gerber2graphtec | Team-HwEng | https://github.com/colinoflynn/gerber2graphtec/ | MorseFlyer, MorseSkynet |
| Blender | Team-MechEng/HwEng | https://www.blender.org/ | MorseFlyer, MorseSkynet |
| Freecad | Team-MechEng/HwEng | https://github.com/FreeCAD | MorseFlyer, MorseSkynet |
| Librecad | Team-MechEng/HwEng | https://librepcb.org/ | MorseFlyer, MorseSkynet |
| Solvespace | Team-MechEng | https://solvespace.com/index.pl | MorseFlyer, MorseSkynet |
| Cura | Team-MechEng | https://ultimaker.com/software/ultimaker-cura | MorseFlyer (envelope/parafoil/airframe) |
| Cubit Toolkit | Team-MechEng | https://cubit.sandia.gov/ | MorseFlyer (envelope/parafoil/airframe) |
| Paraview | Team-MechEng | https://www.paraview.org/ | MorseFlyer (envelope/parafoil/airframe) |
| Octave | Team-MechEng | https://hg.savannah.gnu.org/hgweb/octave | MorseFlyer (envelope/parafoil/airframe) |
| OpenVSP | Team-MechEng | http://openvsp.org/ | MorseFlyer (envelope/parafoil/airframe) |
| OneLAB | Team-MechEng | http://onelab.info/ | MorseFlyer (envelope/parafoil/airframe) |
| SciLab | Team-MechEng | https://www.scilab.org/ | MorseFlyer (envelope/parafoil/airframe) |
| Warp3d | Team-MechEng | http://www.warp3d.net/ | MorseFlyer (envelope/parafoil/airframe) |
| CodeAster | Team-MechEng | https://www.code-aster.org/V2/spip.php?rubrique2 | MorseFlyer (envelope/parafoil/airframe) |
| VirtualSatellite | Team-MechEng | https://github.com/virtualsatellite | MorseFlyer (envelope/parafoil/airframe) |
| NasaTrick | Team-MechEng | https://github.com/nasa/trick | MorseFlyer (envelope/parafoil/airframe) |
| NasaTran95 | Team-MechEng | https://github.com/nasa/trick | MorseFlyer (envelope/parafoil/airframe) |
| rstudio (OTS) | Team-HwEng | https://www.rstudio.com/ | MorseFlyer (envelope/parafoil/airframe) |
| DbEaver (OTS) | Team-SwEng | https://dbeaver.io/ | MorseFlyer (avionics), RackRental.net, HFNOC |
| CUDA SDK | Team-HwEng | https://developer.nvidia.com/cuda-zone | MorseFlyer (envelope/parafoil/airframe) |
| Microsoft R (OTS) | Team-HwEng | https://mran.microsoft.com/open | MorseFlyer (avionics) |
| open 3d model viewer | Team-MechEng | https://acgessler.github.io/open3mod/ | MorseFlyer (envelope/parafoil/airframe) |
| PHP runtime | Team-SwEng | http://devilbox.org/ | RackRental |
| postman (OTS) | Team-SwEng | https://www.postman.com/ | RackRental/HFNOC |
| xilinx | Team-HwEng | https://www.xilinx.com/ | MorseSkynet |
| sdrsharp | Team-HwEng | https://www.rtl-sdr.com/tag/sdrsharp/ | MorseSkynet |
| gnuradio | Team-HwEng | https://www.gnuradio.org/ | MorseSkynet |
| Xilinx | Team-HwEng | https://www.xilinx.com/support/download.html | MorseSkynet |
| YoSys | Team-HwEng | http://www.clifford.at/yosys/ | MorseSkynet |
| graywolf | Team-HwEng | https://github.com/rubund/graywolf | MorseSkynet |
| chisel | Team-HwEng | https://www.chisel-lang.org/ | MorseSkynet |
| embitz (OTS) | Team-SwEng/HwEng | https://www.embitz.org/ | MorseSkynet |
| android studio (OTS) | Team-SwEng | https://developer.android.com/studio | MorsePod |
| grass gis (OTS) | Team-SwEng | https://grass.osgeo.org/ | HFNOC |
| qgis (OTS) | Team-SwEng | https://qgis.org/en/site/ | HFNOC |
| udig (OTS) | Team-SwEng | http://udig.refractions.net/ | HFNOC |
| OpenGribs | Team-SwEng | https://opengribs.org/en/ | HFNOC |
| worldwind (OTS) | Team-HwEng | https://worldwind.arc.nasa.gov/ | HFNOC |
| sweethome3d (OTS) | Team-MechEng | http://www.sweethome3d.com/ | MorseCollective |
| jxplorer (OTS) | Team-IT | http://jxplorer.org/ | HFNOC/HFNFC |
| ghidra (OTS) | Team-SwEng | https://ghidra-sre.org/ | All (SDLC) |
| openscap (OTS) | Team-IT | https://www.open-scap.org/tools/scap-workbench/ | All (SDLC) |
| metasploit | Team-SwEng | https://github.com/rapid7/metasploit-framework/wiki/Nightly-Installers | All (SDLC) |
| OWASP Threat Dragon | Team-SwEng | https://owasp.org/www-project-threat-dragon/ | All (SDLC) |

Languages Used

| Language | Used By | Product Scope |
|---|---|---|
| bash | TSYS wide | All |
| Markdown | TSYS wide | All |
| Dockerfile / docker-compose | TSYS wide | All |
| Helm charts | TSYS wide | All |
| YAML | TSYS wide | All |
| C/C++ | Team-SwEng | MorseFlyer |
| Java | Team-SwEng | MorseTrackerHUD, MorseTracker |
| JavaScript | Team-SwEng | MorseTrackerHUD |
| Some geospatial tooling | Team-SwEng | MorseFlyer (avionics) |
| Gerber | Team-HwEng | MorseSkynet, MorseFlyer (avionics) |
| Tcl/Tk | Team-HwEng | MorseSkynet |
| Xilinx | Team-HwEng | MorseSkynet |
| CUDA | Team-MechEng | MorseFlyer (envelope/airframe) |
| Python (Jupyter and standalone) | Team-MechEng | MorseFlyer (envelope/airframe) |
| Octave | Team-MechEng | MorseFlyer (envelope/airframe) |
| R | Team-MechEng | MorseFlyer (envelope/airframe) |
| PHP | Team-SwEng | RackRental.net, HFNOC/HFNFC |
| OpenFaaS | Team-SwEng | RackRental.net |
| Ruby | Team-SwEng | All (as part of SDLC testing) |

Deployment Targets

| Target | Used By | Product Scope |
|---|---|---|
| Raspberry Pi (cross compiled) | Team-SwEng | MorseFlyer (avionics) |
| Arduino (cross compiled) | Team-SwEng | MorseFlyer (avionics) |
| FreeRTOS (cross compiled) | Team-SwEng | MorseFlyer (avionics) |
| TSYS Web Farm (lots of PHP: WordPress etc.) | Team-WebEng | RackRental.net, HFNOC, HFNFC |
| Subo pi farm (multi-arch Docker / k3s, and balena) | Team-SwEng | MorseFlyer (avionics), MorseSkynet |
| OpenMCT farm (Java / microservices) | Team-SwEng | MorseTracker/MorseTrackerHUD |
| TSYS K3S sandbox/dev/prod clusters | All teams | All |
| Jenkins build pipelines | All teams | All |
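
The multi-arch pi farm target above implies images built for both 32-bit and 64-bit ARM. A hedged sketch using docker buildx; the builder, registry, and image names below are placeholders, not actual TSYS infrastructure:

```shell
# Hypothetical multi-arch build for the k3s/balena pi farm.
# The registry and image name are placeholders, not real TSYS values.
docker buildx create --name pifarm --use   # one-time builder setup
docker buildx build \
  --platform linux/arm/v7,linux/arm64 \
  -t registry.example.com/morseflyer/avionics:dev \
  --push .
```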

General setup

These are steps you need to take before starting development in earnest.

A Linux environment (or at least a mostly-Linux one, via WSL or MobaXterm) is presumed for all of the below.

You may well find GUI replacements and prefer them, especially on Windows/macOS. They are not supported in any way.

  • Setup gitea
    • Login once to https://git.turnsys.com so you can be added to the appropriate repos/teams/orgs.
    • Customize any profile etc settings that you wish.
  • Setup SSH
    • Setup SSH key
    • Add SSH public key to gitea
  • Setup git
    • For all git users:
      • $ git config --global user.name "John Doe"
      • $ git config --global user.email johndoe@example.com
      • Setup git lg : git config --global alias.lg "log --color --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr) %C(bold blue)<%an>%Creset' --abbrev-commit"
    • For zsh users (and you really should use zsh/oh-my-zsh!):
      • git config --add oh-my-zsh.hide-status 1
      • git config --add oh-my-zsh.hide-dirty 1
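
The SSH and git steps above, consolidated into one shell pass (the name/email are the same placeholders used above):

```shell
# Generate an ed25519 SSH key, then paste the public key into gitea
# (https://git.turnsys.com -> Settings -> SSH/GPG Keys)
ssh-keygen -t ed25519 -C "johndoe@example.com"
cat ~/.ssh/id_ed25519.pub

# Baseline git identity (placeholders from the steps above)
git config --global user.name "John Doe"
git config --global user.email johndoe@example.com
```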

Plugins - Team-*

The plugins documented here are known to work, and are in active/frequent use by Charles as CTO as he hacks on the stack. Other options exist for almost all the below. If you find something that works better for you, use it!

Consider the below as a suggested/supported baseline.

General Tooling

Docker / k8s

Git

Remote development/debug

This is used regularly by Techops and Charles and is well supported.

Cross Compile / Remote Debug

Markdown

Bash

Plugins - Team-SWEng

YAML

C/C++

Arduino/Seeduino

  • https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.vscode-arduino

CUDA

TBD. Pull requests welcome.

Java

PHP

Geospatial

Python

Plugins - Team-MechEng

Octave

TBD. Pull requests welcome.

R

TBD. Pull requests welcome.

Jupyter

STL

G-code

TBD. Pull requests welcome.

Gerber

TBD. Pull requests welcome.

Charles Workstation Build Guide

The dotfiles of a 20+ year IT professional turned (reachable) CEO of an emerging conglomerate. The mind of a madman!

Introduction

In 01/2021, I purchased a Raspberry Pi as my daily driver; this document is my workstation manual. Prior to that, I used an iPad Mini with an external HDMI monitor (with RDP to an x86 VM) as a daily driver for about 1.5 years. Then I wanted dual monitors again, and the rpi ecosystem had matured enough to use as a daily driver.

I am the founder and CEO of TSYS Group. In my role, I've done everything from business ops to system administration to software/hardware engineering tasks.

The software mentioned here is a long list, reflecting the myriad tasks/projects I may engage with on any given day.

You'll only need a subset of these tools, so don't despair!

I hope this document is useful to everyone at TSYS who wants to maximize their productivity. We support Linux/OSX/Windows 10 for workstation use and these programs should work on all three platforms (for the most part). I hope it's also useful to other founders and hackers who have many passions/interests and want to do it all. Now you can!

I have written this document over several weeks, and I keep it open at all times. This allows for very low latency / overhead recording of moves/adds/changes as I go about my day.

Workstation details - RPI4 8Gb

Quick note: 85% or more of my daily driver/workstation use (email/coding/research/browsing/document creation/discord/media editing/etc) is on a raspi4. The rest is done via an RDP session to an x86 VM for the few things that have x86 dependencies or need a 64-bit OS (64-bit on the pi isn't yet fully ready, in my opinion).

I detail the vm setup later in the document in the section: Workstation details - x86 vm.

  • Operating System: Fenix Linux
  • Hardware:
    • Raspberry Pi 4 with 8gb RAM
    • Case: Argon ONE case/fan/PCB
    • Monitors: Dual Dell 24" monitors (IPS)
    • Chair: Ikea MARKUS Office Chair: https://www.ikea.com/us/en/p/markus-office-chair-vissle-dark-gray-90289172/
    • Accessories :
      • Belkin Powered USB Hub (for plugging in thumb drives, data acquisition devices / other random usb bits)
      • IOGear card reader
      • Security Dongle: Yubikey 4 OTP+U2F+CCID
      • Keyboard: Matias Backlight Keyboard https://www.matias.ca/aluminum/backlit/
      • Tablet: iPad Mini 5th Gen (document on iPad setup for engineering coming soon)
      • Headphones: JBL Over Ear
      • Mouse: Apple Magic Mouse 2

Out of box tweaks and basic setup

  1. connect usb keyboard and mouse, switch to the windows 10 desktop
  2. Setup bluetooth keyboard
  3. connect to wifi
  4. fix date/time via ntpdate (ntpdate 10.251.37.5)
  5. apt-get update ; apt-get -y full-upgrade
  6. add vi mode to /etc/profile (heathens by default!)
  7. setup password less sudo
  8. clone dotfiles repo
  9. enable i2c access via raspi-config
  10. setup fan daemon https://gitlab.com/DarkElvenAngel/argononed.git
  11. Setup pin+yubi long string for password on the no10 user
  12. (later) run buildWorkstation.sh
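
Steps 6 and 7 above can be sketched as follows; the username `charles` is an assumption, substitute your own:

```shell
# Step 6: vi mode for all login shells
echo 'set -o vi' | sudo tee -a /etc/profile

# Step 7: passwordless sudo via a drop-in file (username is an assumption)
echo 'charles ALL=(ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/010-charles
sudo chmod 0440 /etc/sudoers.d/010-charles
sudo visudo -cf /etc/sudoers.d/010-charles   # always syntax-check sudoers edits
```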

Virtual Workspace Details

  • Desktop 1: Browsing/Editing/Shell (chrome / VsCode / Konsole / Remmina )
  • Desktop 2: Comms (discourse/discord/irc etc/thunderbird/mutt)
  • Desktop 3: Long Running (calibre/recoll/etc)

Repositories to add

in /etc/apt/sources.list.d

```shell
cat docker.list
deb [arch=armhf] https://download.docker.com/linux/raspbian buster stable

cat backports.list
deb [trusted=yes] http://ftp.debian.org/debian buster-backports main

curl -sL https://deb.nodesource.com/setup_15.x | sudo -E bash -

cat yarn.list
deb [signed-by=/usr/share/keyrings/yarnkey.gpg] https://dl.yarnpkg.com/debian stable main

cat recoll-rbuster.list
deb [signed-by=/usr/share/keyrings/lesbonscomptes.gpg] http://www.lesbonscomptes.com/recoll/raspbian/ buster main
deb-src [signed-by=/usr/share/keyrings/lesbonscomptes.gpg] http://www.lesbonscomptes.com/recoll/raspbian/ buster main
```

Packages to install

First run apt-get update to ensure you are using packages from the above repos and not the stock packages. Do any needed gpg key imports.
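
The `signed-by=` entries in the repo definitions above expect dearmored keyrings under /usr/share/keyrings. A hedged sketch for the yarn key (verify the key URL against current upstream docs):

```shell
# Fetch, dearmor, and install the yarn signing key so it matches the
# signed-by= path used in yarn.list, then refresh the package index.
curl -fsSL https://dl.yarnpkg.com/debian/pubkey.gpg \
  | sudo gpg --dearmor -o /usr/share/keyrings/yarnkey.gpg
sudo apt-get update
```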

(almost!) All the packages

For pulling in secrets (which allows me to share my dotfiles safely):

```shell
apt-get -y install \
kicad librecad gimp blender shellcheck \
ruby-full offlineimap zsh vim thunderbird enigmail \
kleopatra zsh-autosuggestions zsh-syntax-highlighting screen \
mtr rpi-imager cifs-utils grass cubicsdr arduino jupyter-notebook \
dia basket vym code wings3d flatpak wireguard gnuplot \
pandoc python3-blockdiag texlive-fonts-extra \
spice-client-gtk spice-html5 virt-viewer \
ripgrep recoll poppler-utils abiword wv antiword unrtf \
libimage-exiftool-perl xsltproc freecad davmail kphotoalbum opensc \
yubikey-manager yubikey-personalization yubikey-personalization-gui \
openshot kdenlive pitivi inkscape scribus scdaemon seafile-gui qgis \
octave nodejs gpx2shp libreoffice calligra netbeans sigrok \
audacity wireshark nmap tcpdump zenmap etherape ghostscript \
geda ngspice graphicsmagick codeblocks scilab calibre paraview \
gnuradio build-essential libimobiledevice-utils libimobiledevice-dev \
libgpod-dev python3-numpy python3-pandas python3-matplotlib
```

See the sections below for things that aren't deployed via apt-get.

General packages for the modern knowledge worker who is tech/security savvy

```shell
apt-get -y install \
ruby-full offlineimap zsh vim thunderbird kleopatra zsh-autosuggestions \
zsh-syntax-highlighting screen mtr rpi-imager cifs-utils dia basket \
vym davmail kphotoalbum libreoffice calligra \
enigmail opensc scdaemon nodejs calibre wireguard \
libimobiledevice-utils libimobiledevice-dev libgpod-dev \
yubikey-manager yubikey-personalization yubikey-personalization-gui

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
cargo install mdbook
npm install -g @bitwarden/cli
```

R&D/creative workstation packages

```shell
apt-get -y install \
kicad librecad freecad qgis audacity gpsbabel arduino \
sigrok netbeans scilab blender gimp grass \
openshot kdenlive pitivi inkscape scribus build-essential \
geda ngspice gnuradio cubicsdr flatpak \
shellcheck code codeblocks paraview wings3d \
python3-numpy python3-pandas python3-matplotlib \
jupyter-notebook

flatpak install flathub org.kde.krita
```

For using the bitscope oscilloscope

```shell
wget http://bitscope.com/download/files/bitscope-dso_2.8.FE22H_armhf.deb
wget http://bitscope.com/download/files/bitscope-logic_1.2.FC20C_armhf.deb
wget http://bitscope.com/download/files/bitscope-meter_2.0.FK22G_armhf.deb
wget http://bitscope.com/download/files/bitscope-chart_2.0.FK22M_armhf.deb
wget http://bitscope.com/download/files/bitscope-proto_0.9.FG13B_armhf.deb
wget http://bitscope.com/download/files/bitscope-console_1.0.FK29A_armhf.deb
wget http://bitscope.com/download/files/bitscope-display_1.0.EC17A_armhf.deb
wget http://bitscope.com/download/files/bitscope-server_1.0.FK26A_armhf.deb

# The first dpkg pass fails on missing dependencies; apt-get -f install
# pulls them in, then the second dpkg pass completes the install.
dpkg -i *.deb
apt-get -f install
dpkg -i *.deb
```

Full text search packages

```shell
apt-get -y install \
ripgrep recoll poppler-utils abiword wv antiword \
unrtf libimage-exiftool-perl xsltproc
```
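
Once indexed, recoll can also be queried straight from the terminal; the query string below is just an example:

```shell
recollindex                      # build/update the full text index
recoll -t -q "carp failover"     # query in text mode, no GUI needed
```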

Document production packages

```shell
apt-get -y install \
pandoc python3-blockdiag texlive-fonts-extra
```
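
A typical document-production invocation tying these packages together (filenames are hypothetical):

```shell
# Markdown -> PDF, rendered with the texlive fonts installed above
pandoc -f markdown -o report.pdf report.md
```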

chrome

  1. launch chrome
  2. change language to english
  3. enable dark mode (https://www.pocket-lint.com/apps/news/google/149866-how-to-enable-dark-mode-for-google-chrome)
  4. login to pwvault.turnsys.com and obtain google account creds
  5. login to google account and enable sync
  6. (optional at this time) setup any extension configuration needed that results from logging in to google account/turning on sync
  7. ensure the following extensions are installed:
    1. vimium
    2. bitwarden
    3. pushover

passwords/bitwarden

  1. disable chrome password saving/autofill (this is handled by google settings sync on login, so you only need to set it if it isn't already set in your synced settings)
  2. set bitwarden extension to use pwvault.turnsys.com
  3. login to bitwarden via extension
  4. set vault to never lock (balancing security/convenience, since the workstation itself locks and is unlocked via pin+yubi)
  5. set match selection to host
  6. set auto fill on page load
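
The same vault is also reachable from the shell via the bitwarden CLI installed earlier (`npm install -g @bitwarden/cli`); a hedged sketch, with a hypothetical item name:

```shell
bw config server https://pwvault.turnsys.com   # point the CLI at the self-hosted vault
bw login                                       # prompts for email + master password
export BW_SESSION=$(bw unlock --raw)           # cache a session key for this shell
bw get password "google account"               # item name is hypothetical
```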

web apps

  1. login to discord.com
  2. login to office.com

zsh

  • Use oh-my-zsh
  • Use powerlevel10k
  • see the rcfiles directory for my setup; the code serves as the documentation here

konsole setup

settings -> edit current profile ->

  • Appearance: set to Breeze
  • Font: set to Menlo for Powerline
  • Mouse -> copy/paste:
    • copy on select
    • paste from clipboard (default is paste from selection)
    • un-set "copy text as HTML"

settings -> configure shortcuts:

  • next tab: ctrl+tab
  • previous tab: ctrl+shift+tab

xfce tweaks

  • Set focus follows mouse (settings/window manager/focus)
  • Dark mode? (only works for gtk apps)
  • Other apps need to be set to dark mode individually

bluetooth issues

Run rpi-update, or the keyboard will frequently repeat keys (stuck key).

More advanced customization and configuration required

VsCode

Fenix appears to include VsCode in the default image, but it doesn't launch from the menu, and the shell says code not found. Search for "code" and it will pull up an entry with the VsCode logo labeled as Text Editor. Use that.

See the VsCode guide for tsys at:

https://git.turnsys.com/TSGTechops/docs-techops/src/branch/master/TSYS-DevEnv-VsCode.md

to see how I set up VsCode for a myriad of tasks.

Activity Tracking/Self Instrumentation

  • activitywatch

Email

  • davmail
  • offlineimap
  • switch mail from (just) thunderbird to thunderbird/(neo)mutt/notmuch/taskwarrior
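
A minimal offlineimap account pointed at a local davmail gateway might look like the sketch below; the account names, maildir path, and user are assumptions, and 1143 is davmail's default IMAP listener:

```ini
# ~/.offlineimaprc -- hedged sketch, not the actual TSYS config
[general]
accounts = tsys

[Account tsys]
localrepository = tsys-local
remoterepository = tsys-remote

[Repository tsys-local]
type = Maildir
localfolders = ~/Maildir

[Repository tsys-remote]
type = IMAP
remotehost = localhost
remoteport = 1143
ssl = no
remoteuser = johndoe@example.com
```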

Security

  • kleopatra
  • yubikey ssh key
  • yubikey gpg key

Other programs

  • VIM
  • Seafile sync
  • git optimization/hacks/cool stuff
  • Make magic mouse 2 work on pi

CIO/CISO Stuff

CA

  • xca (build from source)

Security Review

  • scap
  • stig
  • report review

CTO Stuff

docker based dev environment/pipeline

```shell
sudo apt-get install -y libffi-dev libssl-dev python3-dev
sudo apt-get install -y python3 python3-pip
```

Vendor/Supply chain/dependency development

  • openwrt
  • openmct
  • raspi
  • arduino
  • freedombox
  • serval
  • genode

SDLC

  • metasploit

Tooling development

  • jupyter

Misc

Workstation details - x86-64 vm

Used for things that don't run on raspi:

VM Specifications

  • Operating System: Ubuntu Server 20.04 with xfce/xrdp
  • Hardware: KVM guest with 4 GB RAM