Pivotal/BOSH

BOSH | BOSH CLI | Quick Reference

Installing on Mac OSX

The easiest way is to use brew

~$ brew install cloudfoundry/tap/bosh-cli

Alternatively, you can download the release binary and create a symlink at /usr/local/bin/bosh.
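
For example (a sketch; it assumes the darwin build published alongside the linux binary used further down, so adjust the version to whatever is current):

~$ curl -Lo ./bosh-cli https://s3.amazonaws.com/bosh-cli-artifacts/bosh-cli-3.0.1-darwin-amd64
~$ chmod +x ./bosh-cli
~$ sudo mv ./bosh-cli /usr/local/bin/bosh-cli-3.0.1
~$ sudo ln -s /usr/local/bin/bosh-cli-3.0.1 /usr/local/bin/bosh
~$ bosh -v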

Installing on Ubuntu

I'm installing BOSH on an Ubuntu 16.04 LTS server VM running on top of an ESXi 6.0u2 host.

BOSH v3 CLI

Start by updating your Ubuntu VM.

~$ sudo apt update && sudo apt upgrade -y

Next, let's install a few dependencies.

~$ sudo apt install -y build-essential zlibc zlib1g-dev ruby ruby-dev openssl libxslt-dev libxml2-dev libssl-dev libreadline6 libreadline6-dev libyaml-dev libsqlite3-dev sqlite3 git gnupg2 libcurl3

Update: added libcurl3 to the dependency list as it was needed for bosh-deployment.

Download the binary, make it executable, and move it to your path. Verify you have it installed.

~$ curl -Lo ./bosh https://s3.amazonaws.com/bosh-cli-artifacts/bosh-cli-3.0.1-linux-amd64
~$ chmod +x ./bosh
~$ sudo mv ./bosh /usr/local/bin/bosh
~$ bosh -v
version 3.0.1-712bfd7-2018-03-13T23:26:43Z

Succeeded

BOSH Director

We are going to use the bosh-deployment tool to deploy BOSH onto a vCenter environment.

~$ cd /opt/
~$ sudo mkdir bosh-1 && cd bosh-1
~$ sudo git clone https://github.com/cloudfoundry/bosh-deployment bosh-deployment

Now we need to install the director and specify our vCenter variables.

~$ bosh create-env bosh-deployment/bosh.yml \
    --state=state.json \
    --vars-store=creds.yml \
    -o bosh-deployment/vsphere/cpi.yml \
    -v director_name=bosh-1 \
    -v internal_cidr=10.0.0.0/24 \
    -v internal_gw=10.0.0.1 \
    -v internal_ip=10.0.0.6 \
    -v network_name="VM Network" \
    -v vcenter_dc=my-dc \
    -v vcenter_ds=datastore0 \
    -v vcenter_ip=192.168.0.10 \
    -v vcenter_user=root \
    -v vcenter_password=vmware \
    -v vcenter_templates=bosh-1-templates \
    -v vcenter_vms=bosh-1-vms \
    -v vcenter_disks=bosh-1-disks \
    -v vcenter_cluster=cluster1

Of the variables you see above, the following are required and will not be automatically created:

vcenter_dc: must match the name of your vCenter datacenter.
vcenter_ds: must match the name of your vCenter datastore, or be a regex that matches it.
vcenter_ip: the IP of your vCenter server. The BOSH documentation does not mention a hostname being allowed, though the lab example below uses one.
vcenter_user: username of an account with admin rights.
vcenter_password: password for the above-mentioned username.
vcenter_cluster: name of the vCenter cluster that your hosts live in.

All other options listed above will work fine with the default values; if not already present, they will be created automatically.

lab example

An example of the install that I did within the lab environment:

~$ bosh create-env bosh-deployment/bosh.yml \
    --state=state.json \
    --vars-store=creds.yml \
    -o bosh-deployment/vsphere/cpi.yml \
    -o bosh-deployment/vsphere/resource-pool.yml \
    -v director_name=bosh-director-10 \
    -v internal_cidr=10.193.10.0/24 \
    -v internal_gw=10.193.10.1 \
    -v internal_ip=10.193.10.6 \
    -v network_name="Lab-env10" \
    -v vcenter_dc=Datacenter \
    -v vcenter_ds=LUN01 \
    -v vcenter_ip=vcsa-01.haas-59.pez.pivotal.io \
    -v vcenter_user=[email protected] \
    -v vcenter_password=password \
    -v vcenter_templates=pcf_env10_templates \
    -v vcenter_vms=pcf_env10_vms \
    -v vcenter_disks=pcf_env10_disks \
    -v vcenter_cluster=Cluster \
    -v vcenter_rp=RP10

SSH into BOSH Director

In order to SSH into the BOSH director, it needs to have been set up with a passwordless user during creation. This is generally defined through the jumpbox-user.yml file during deployment. The below methods assume this was in place during deployment.
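
If the director was built with bosh create-env as above, that generally means the jumpbox-user.yml ops file from bosh-deployment was included as an extra -o flag. The create-env examples on this page don't show it, but a sketch of what that looks like:

~$ bosh create-env bosh-deployment/bosh.yml \
    --state=state.json \
    --vars-store=creds.yml \
    -o bosh-deployment/vsphere/cpi.yml \
    -o bosh-deployment/jumpbox-user.yml \
    ... (remaining -v variables as in the lab example above)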

Manual

Search creds.yml for the jumpbox private RSA key. Copy everything between -----BEGIN RSA PRIVATE KEY----- and -----END RSA PRIVATE KEY-----, including those two lines, into a new file called jumpbox.pub.
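
Roughly, the relevant block in creds.yml looks like the sketch below (contents abbreviated). The key lines sit inside an indented YAML block scalar, which is why they pick up leading spaces when copied:

jumpbox_ssh:
  private_key: |
    -----BEGIN RSA PRIVATE KEY-----
    ...
    -----END RSA PRIVATE KEY-----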
Make sure that there are no spaces in front of any of the lines. I used the following to remove them:

~$ sed 's/    //g' jumpbox.pub

If the above looks right, then let's write it in place.

LINUX: ~$ sed -i 's/    //g' jumpbox.pub
MAC: ~$ sed -i '' 's/    //g' jumpbox.pub

Next, set the correct permissions.

~$ sudo chmod 600 ./jumpbox.pub

Finally, we should be able to SSH into our BOSH director.

~$ ssh jumpbox@<director IP> -i <jumpbox rsa key>
ex. ~$ ssh [email protected] -i ~/jumpbox.pub

The BOSH Way

Extract the RSA key for the jumpbox user.

~$ bosh int --path /jumpbox_ssh/private_key <path to creds.yml>
ex. bosh int --path /jumpbox_ssh/private_key ~/Git/workspace/creds.yml

If that looks good, redirect it to a file.

~$ bosh int --path /jumpbox_ssh/private_key ~/Git/workspace/creds.yml > ~/jumpbox.pub

Finally, we should be able to SSH into our BOSH director.

~$ ssh jumpbox@<director IP> -i <jumpbox rsa key>
ex. ~$ ssh [email protected] -i ~/jumpbox.pub

vSphere

This sets up an alias name for the environment from the Ops Manager director on vSphere. You may need to SSH into the Ops Manager first.
bosh alias-env MY-ENV -e DIRECTOR-IP-ADDRESS --ca-cert /var/tempest/workspaces/default/root_ca_certificate

~$ bosh alias-env myenv -e 10.193.81.11 --ca-cert /var/tempest/workspaces/default/root_ca_certificate

Note: you may get an error about an invalid token if you are already logged in. Log out first using bosh log-out -e <alias>.
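
For example, using the alias created above:

~$ bosh log-out -e myenv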

Log in to the director (the command alias for log-in is l).
bosh -e <env alias> log-in

~$ bosh -e myenv l
User (): admin
Password ():

For the BOSH CLI, use the director credentials found in the BOSH Director tile.
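
As a sketch, credentials can also be exported so the CLI does not prompt. BOSH_CLIENT and BOSH_CLIENT_SECRET are standard BOSH CLI environment variables, though whether the tile's director user or a UAA client secret is the right thing to plug in depends on the setup; the values below are placeholders:

~$ export BOSH_CLIENT=<username or client from the BOSH Director tile>
~$ export BOSH_CLIENT_SECRET=<password or secret from the BOSH Director tile>
~$ bosh -e myenv env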


decrypt yml files

In vSphere, some of the configuration and credentials are stored within encrypted yml files. In order to decrypt these, SSH into the Ops Manager and run these commands with the admin passphrase.

~$ sudo -u tempest-web RAILS_ENV=production /home/tempest-web/tempest/web/scripts/decrypt /var/tempest/workspaces/default/actual-installation.yml /tmp/actual-installation.yml
~$ sudo -u tempest-web RAILS_ENV=production /home/tempest-web/tempest/web/scripts/decrypt /var/tempest/workspaces/default/installation.yml /tmp/installation.yml

Basics

monit

Make sure you are root by using either sudo su - or sudo -i.
Show running processes related to BOSH:

~$ monit summary
The Monit daemon 5.2.5 uptime: 23h 45m

Process 'nats'                      running
Process 'postgres'                  running
Process 'blobstore_nginx'           running
Process 'director'                  running
Process 'worker_1'                  running
Process 'worker_2'                  running
Process 'worker_3'                  running
Process 'worker_4'                  running
Process 'director_scheduler'        running
Process 'director_sync_dns'         running
Process 'director_nginx'            running
Process 'health_monitor'            running
Process 'warden_cpi'                running
Process 'garden'                    running
Process 'uaa'                       running
Process 'credhub'                   running
System 'system_localhost'           running
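
monit can also restart an individual process (or all of them) if something is wedged, still as root:

~$ monit restart director
~$ monit restart all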

disk structure

~$ lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0     3G  0 disk                   <<<<<  root disk
└─sda1   8:1    0     3G  0 part /
sdb      8:16   0    16G  0 disk                   <<<<<  ephemeral disk
├─sdb1   8:17   0   3.9G  0 part [SWAP]
└─sdb2   8:18   0  12.1G  0 part /var/vcap/data
sdc      8:32   0    64G  0 disk                   <<<<<  persistent disk
└─sdc1   8:33   0    64G  0 part /var/vcap/store
sr0     11:0    1    48K  0 rom
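
To see how much of those disks is actually in use, run df against the mount points shown above:

~$ df -h / /var/vcap/data /var/vcap/store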


folder structure

~$ ls -la /var/vcap/
total 40
drwxr-xr-x 10 root root 4096 Jul  9 16:23 .
drwxr-xr-x 12 root root 4096 Jun 18 13:12 ..
drwx------  7 root root 4096 Jul  9 16:23 bosh               <<<<< contains files, binaries, settings, etc. related to the bosh agent running on the system.
drwxr-xr-x 22 root root 4096 Jul  9 16:23 data                <<<<< mounted from the ephemeral disk.  contains all the ephemeral data that processes related to bosh use.
drwxr-xr-x  2 root root 4096 Jul  9 16:23 instance          <<<<< contains metadata about the current bosh instance
drwxr-x---  2 root vcap 4096 Jul  9 16:23 jobs           <<<<< jobs related to the bosh director
drwxr-xr-x  3 root root 4096 Jun 18 17:51 micro_bosh       
drwxr-xr-x  5 root root 4096 Jul  9 16:24 monit     <<<<< monit related files
drwxr-xr-x  2 root vcap 4096 Jul  9 16:23 packages     <<<<< releases, packages, and dependencies related to bosh and the installed applications
drwxr-xr-x  8 root root 4096 Jul  9 16:35 store     <<<<< persistent storage for jobs
lrwxrwxrwx  1 root root   18 Jul  9 16:22 sys -> /var/vcap/data/sys    <<<<< logs for bosh and processes.
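
Since sys points at /var/vcap/data/sys, job logs end up under /var/vcap/sys/log. For example, to follow the director's log (exact file names can vary by release):

~$ ls /var/vcap/sys/log/
~$ tail -f /var/vcap/sys/log/director/director.debug.log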


Installing on CentOS with virtualbox

Installing on CentOS 7.5.1804x64
