Friday, May 28, 2021

Create rpm and deb using fpm


fpm -s dir -t rpm -n unbound-exporter -v 1.0 --prefix /usr/bin unbound_exporter
fpm -s dir -t deb -n unbound-exporter -v 1.0 --prefix /usr/bin unbound_exporter

Thursday, May 7, 2020

Splunk Intermediate

Splunk Tutorial
---------------

https://docs.splunk.com/Documentation/Splunk/latest/Search/ViewsearchjobpropertieswiththeJobInspector


Booleans
AND OR NOT

Fields
--
status=400
status=50*
status!=300

sourcetype=access_combined | fields clientip, action

Table
--
sourcetype=access_combined | fields clientip, action

Rename
--
sourcetype=access_combined | rename clientip as "userip"

Dedup
--
sourcetype=access_combined | dedup clients

sort command
lookup command

Module 2
--------
Field names are case sensitive
Field values are not case sensitive
Field values from a lookup are case sensitive by default
Boolean operators are case sensitive (must be uppercase: AND, OR, NOT)

time - index - source - host - sourcetype

fast mode - performance
verbose mode - completeness
smart mode - combination of fast and verbose mode

Module 3 - Commands for Visualization
-------------------------------------
chart command
--
over - X axis
any stats function can be applied to the chart command

index=web sourcetype=access_combined status>299 | chart count over status
index=web sourcetype=access_combined status>299 | chart count over status by host
index=web sourcetype=access_combined status>299 | chart count by status,host
index=web sourcetype=access_combined status>299 | chart count over host by product_name
index=web sourcetype=access_combined status>299 | chart count over host by product_name usenull=f
index=web sourcetype=access_combined status>299 | chart count over host by product_name useother=f
index=web sourcetype=access_combined status>299 | chart count over host by product_name limit=5
index=web sourcetype=access_combined status>299 | chart count over host by product_name limit=0

Timechart command
-----------------
index=sales sourcetype=vendor_sales | timechart count
index=sales sourcetype=vendor_sales | timechart sum(price) by product_name
index=sales sourcetype=vendor_sales | timechart sum(price) by product_name limit=5
index=sales sourcetype=vendor_sales | timechart span=12hr sum(price) by product_name limit=0

Timewrap Command
----------------
index=sales sourcetype=vendor_sales product_name="Dream Crusher"| timechart span=1d sum(price) by product_name | timewrap 7d

index=sales sourcetype=vendor_sales product_name="Dream Crusher"| timechart span=1d sum(price) by product_name | timewrap 7d
|rename _time as Day | eval Day = strftime(Day,"%A")

Visualization Examples
----------------------
Line Graph
Format Options
Chart Overlay
Area Chart
Column Chart
Bar Graph
Pie Chart
Scatter Chart
Bubble Chart
Trellis Layout


https://docs.splunk.com/Documentation/Splunk/latest/AdvancedDev/CustomVizDevOverview


Module 4 - Advanced Visualizations
----------------------------------
Use Geographical Info

iplocation command
--
index=security sourcetype=linux_secure action=success src_ip!=10.* | iplocation src_ip

Geostats Command
--
index=sales sourcetype=vendor_sales | geostats latfield=VendorLatitude longfield=VendorLongitude count by product_name globallimit=4
index=security sourcetype=linux_secure action=success src_ip!=10.* | iplocation src_ip | geostats latfield=lat longfield=lon count

Choropleth Map
--
.kmz - Keyhole Markup Language (zipped) file
Geom command - Adds a field with geographical data structures matching polygons on a map.
--
index=sales sourcetype=vendor_sales VendorID>=5000 AND VendorID<=5055 | stats count as Sales by VendorCountry
|geom geo_countries featureidField=VendorCountry

Single Value Visualization
--
index=web sourcetype=access_combined action=purchase | stats sum(price) as total
index=web sourcetype=access_combined action=purchase | timechart sum(price)
index=web sourcetype=access_combined action=purchase | stats sum(price) as total | gauge total 0 30000 600000 700000

Trendline Command - Computes moving averages of field values.
--
Trendtype:
simple moving average (sma)
exponential moving average (ema)
weighted moving average (wma)

index=web sourcetype=access_combined action=purchase status=200 | timechart sum(price) as sales | trendline wma2(sales) as trendline
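The three trend types differ in how window values are weighted. A minimal Python sketch of the idea (illustrative helper names, not Splunk internals):

```python
def sma(values, period):
    """Simple moving average: unweighted mean of the last `period` values."""
    return [sum(values[i - period + 1:i + 1]) / period
            for i in range(period - 1, len(values))]

def wma(values, period):
    """Weighted moving average: linear weights 1..period, newest weighs most."""
    weights = range(1, period + 1)
    denom = sum(weights)
    return [sum(w * v for w, v in zip(weights, values[i - period + 1:i + 1])) / denom
            for i in range(period - 1, len(values))]

def ema(values, period):
    """Exponential moving average with smoothing factor 2/(period+1)."""
    alpha = 2 / (period + 1)
    out = [values[0]]
    for v in values[1:]:
        out.append(alpha * v + (1 - alpha) * out[-1])
    return out
```

In the search above, wma2(sales) corresponds to a weighted moving average over a 2-event window.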

Field Formatting

Addtotals Command
--
index=web sourcetype=access_combined file=*| chart sum(bytes) over host by file | addtotals
index=web sourcetype=access_combined file=*| chart sum(bytes) over host by file | addtotals col=true label="Total"
index=web sourcetype=access_combined file=*| chart sum(bytes) over host by file | addtotals col=true label="Total" labelfield="host" row=false

http://docs.splunk.com/Documentation/Splunk/latest/AdvancedDev/CustomVizDevOverview

Module 5 - Filtering and Formatting
---------------------------------
Eval command
--
- arithmetic, concatenation, and boolean values supported
- results can be written to a new field or replace an existing field
- Newly created field values are case sensitive

sourcetype=cisco_wsa_squid s_hostname=* | stats values(s_hostname) by cs_username
sourcetype=cisco_wsa_squid s_hostname=* | stats values(s_bytes) as Bytes by Usage
sourcetype=cisco_wsa_squid s_hostname=* | stats values(s_bytes) as Bytes by Usage | eval bandwidth= Bytes/1024/1024
sourcetype=cisco_wsa_squid s_hostname=* | stats values(s_bytes) as Bytes by Usage | eval bandwidth= round(Bytes/1024/1024,2)
|sort -bandwidth | rename bandwidth as "Bandwidth(MB)" | fields - Bytes


Eval Mathematical Functions
--
index=web sourcetype=access_c* product_name=* action=purchase | stats sum(price) as total_list_price,sum(sale_price) as total_sale_price by product_name
| eval discount= round(((total_list_price - total_sale_price) / total_list_price)*100) | sort - discount
| eval discount = discount."%"


Eval Convert Values
--
tostring Function - converts numerical values to strings (the result cannot be sorted numerically)
--
index=web sourcetype=access_c* product_name=* action=purchase | stats sum(price) as total_list_price,sum(sale_price) as total_sale_price by product_name
| eval total_list_price = "$" + tostring(total_list_price,"commas")



Fieldformat command - Formats values without changing the characteristics of the underlying values (still able to sort).
--
index=web sourcetype=access_c* product_name=* action=purchase | stats sum(price) as total_list_price,sum(sale_price) as total_sale_price by product_name
| eval total_list_price = "$" + tostring(total_list_price,"commas")
| fieldformat total_sale_price = "$"+ tostring(total_sale_price,"commas")

Data in the index does not change.
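In other words: eval replaces the stored value with a string (so numeric sorting breaks), while fieldformat keeps the number and only changes how it is rendered. A rough Python analogy (not Splunk code):

```python
total = 1234567.5

# eval-style: the value itself becomes a string; numeric sort no longer applies
evaled = "$" + format(total, ",")           # "$1,234,567.5"

# fieldformat-style: the number stays a number (still sortable);
# a string is produced only at display time
def display(value):
    return "$" + format(value, ",")
```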

Multiple eval commands
--
index=web sourcetype=access_c* product_name=* action=purchase | stats sum(price) as total_list_price,sum(sale_price) as total_sale_price by product_name
| eval current_discount = round(((list_price - sale_price)/list_price)*100)
| eval new_discount = (current_discount - 5)
| eval new_sale_price = list_price - (list_price * (new_discount/100))
| eval price_change_revenue = (new_sale_price - sale_price)

Eval Command IF Function
--
index=sales sourcetype=vendor_sales
| eval SalesTerritory = if(VendorID < 4000,"North America","Rest of the World")
| stats sum(price) as TotalRevenue by SalesTerritory

Eval Case Function
--
index=web sourcetype=access_combined
| eval httpCategory=case(status>=200 AND status<300,"Success",status>=300 AND status<400,"Redirect",
status>=400 AND status<500,"Client Error",status>=500,"Server Error",true(),"Something Other")
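The case() branches evaluate in order and the first match wins; the same bucketing as a plain Python sketch:

```python
def http_category(status):
    """Mirror of the eval case() search above: first matching branch wins."""
    if 200 <= status < 300:
        return "Success"
    if 300 <= status < 400:
        return "Redirect"
    if 400 <= status < 500:
        return "Client Error"
    if status >= 500:
        return "Server Error"
    return "Something Other"   # the true() catch-all branch
```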

Eval with Stats
--
index=web sourcetype=access_combined
| stats count(eval(status<300)) as "Success",count(eval(status>=400 AND status<500)) as "Client Error",
count(eval(status>500)) as "Server Error"

Search command
--
index=network sourcetype=cisco_wsa_squid usage=Violation
| stats count(usage) as Visits by cs_username | search Visits > 1

Where Command
--
index=network sourcetype=cisco_wsa_squid
| stats count(eval(usage="Personal")) as Personal,count(eval(usage="Business")) as Business by username
| where Personal > Business | where username!="sie" | sort -Personal

Eval/Where tips
---
In like(), the _ character matches exactly one character
and % is the multi-character wildcard

index=web sourcetype=access_combined action=purchase | stats count by product_name
| where product_name like "Worl%"
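like() uses SQL pattern syntax rather than regex. A small Python sketch (hypothetical helper) showing the translation:

```python
import re

def like(value, pattern):
    """SQL-style LIKE: '_' matches exactly one character, '%' any run (including none)."""
    parts = []
    for ch in pattern:
        if ch == "%":
            parts.append(".*")
        elif ch == "_":
            parts.append(".")
        else:
            parts.append(re.escape(ch))
    return re.fullmatch("".join(parts), value) is not None
```

With this, like("World Peace", "Worl%") is true, matching the search above.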

isnull and isnotnull
--
index=sales sourcetype=vendor_sales | timechart sum(price) as sales | where isnull(sales)
index=sales sourcetype=vendor_sales | timechart sum(price) as sales | where isnotnull(sales)

Fillnull Command
--
index=sales sourcetype=vendor_sales | chart sum(price) over product_name by VendorCountry
| fillnull value="Nothing here"

Module 6 - Correlating Events
-----------------------------

Transaction Overview
Transaction command
---
index=web sourcetype=access_combined
| transaction clientip
| table clientip,action,product_name

Transaction Definitions
---
maxspan - Maximum total time between the earliest and latest events.
maxpause - Maximum pause allowed between consecutive events.
startswith - Forms transactions starting with specified {terms, field values, evaluations}
endswith - Forms transactions ending with specified {terms, field values, evaluations}

http://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Transaction

index=web sourcetype=access_combined
| transaction clientip
 startswith="addtocart" endswith="purchase"
| table clientip,action,product_name

Investigate with Transaction
---
index=network sourcetype=cisco_esa
| transaction mid dcid icid
| search REJECT

Transaction vs Stats
---
transaction
 - Use to see events correlated together.
 - Use when events need to be grouped on start and end values.
stats
 - Use to see results of a calculation.
 - Use when events need to be grouped on a field value.

index=web sourcetype=access_combined
| transaction  clientip startswith=action="addtocart" endswith=action="purchase"
| table clientip, JSESSIONID, product_name, action, duration, eventcount, price
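Conceptually, transaction buckets events by the listed field(s), opens a bucket on the startswith condition, and closes it on the endswith condition. A simplified Python sketch (illustrative only, not how Splunk implements it):

```python
def make_transactions(events):
    """Group events per clientip from an 'addtocart' event to a 'purchase' event."""
    open_txns = {}   # clientip -> events collected so far
    closed = []
    for ev in sorted(events, key=lambda e: e["time"]):
        ip = ev["clientip"]
        if ev["action"] == "addtocart":
            open_txns[ip] = [ev]            # startswith: open a new transaction
        elif ip in open_txns:
            open_txns[ip].append(ev)
            if ev["action"] == "purchase":  # endswith: close the transaction
                txn = open_txns.pop(ip)
                closed.append({
                    "clientip": ip,
                    "eventcount": len(txn),
                    "duration": txn[-1]["time"] - txn[0]["time"],
                })
    return closed
```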

(index=network sourcetype=cisco_wsa_squid) OR
(index=web sourcetype=access_combined) status>399
| fields sourcetype, status
| transaction status maxspan=5m
| search sourcetype=access_combined AND sourcetype=cisco_wsa_squid
| timechart count by status
| addtotals
| search Total>4



Module 7 - Knowledge Objects
----------------------------
Naming convention - {Group,Type,Platform,Category,Time,Description}
example - OPS_WFA_Network_Security_na_IPwhoisAction
http://docs.splunk.com/Documentation/Splunk/latest/Knowledge/Developnamingconventionsforknowledgeobjecttitles

Module 8 - Field Extractions
----------------------------
Field extraction with Regex and Delimiter

Module 9 - Aliases and Calc Fields
----------------------------------
Field Aliases
--

Calculated Fields
 - must be based on extracted or discovered fields
 - Fields from a lookup table or generated by a search command cannot be used


Module 10 - Tags and Event Types
--------------------------------
Tags
- Allow you to designate descriptive names for key-value pairs
- Enable you to search for events that contain a particular field value

Tag values are case sensitive

Event Types
-Categorize events based on search strings
-Use tags to organize
-"eventtype" field within a search string
- Time range NOT available

Saved Reports
-Fixed search criteria
-Time range & formatting needed
-share with splunk users
-add to dashboards

(index=web sourcetype=access_combined) OR (index=network sourcetype=cisco_wsa_squid) status> 500
eventtype="web_error"


Module 11 - Macros
------------------
Macros
-Reusable search strings or portions of search strings
-Useful for frequent searches with complicated search syntax

- Store entire search strings
- Time range independent
- Pass arguments to the search

https://docs.splunk.com/Documentation/CIM/4.15.0/User/Web

index=sales sourcetype=vendor_sales | stats sum(sale_price) as total_sales by Vendor
| eval total_sales = "$" + tostring(round(total_sales,2),"commas")

`ConvertUSD` = eval total_sales = "$" + tostring(round(total_sales,2),"commas")

Ctrl + Shift + E = Search Expansion Window

index=sales sourcetype=vendor_sales VendorCountry=Germany OR VendorCountry=France OR VendorCountry=Italy
| stats sum(price) as USD by product_name
| eval USD = "$"+tostring(round(USD,2),"commas")

`Europe_sales`

index=sales sourcetype=vendor_sales VendorCountry=Germany OR VendorCountry=France OR VendorCountry=Italy
| stats sum(price) as USD by product_name
| `Europe_sales`

sourcetype=vendor_sales VendorCountry=Germany OR VendorCountry=France OR VendorCountry=Italy
| stats sum(price) as USD by product_name
| eval euro = "€" + tostring(round(USD*0.79,2), "commas"), USD = "$" +tostring(USD, "commas")

stats sum(price) as USD by product_name
| eval $currency$="$symbol$".tostring(round(USD*$rate$,2),"commas"),USD="$" +tostring(USD,"commas")


index= sales sourcetype=vendor_sales VendorCountry=Germany OR VendorCountry=France OR VendorCountry=Italy
|  `convert_sales(euro,€,.79)`

index=sales sourcetype=vendor_sales VendorCountry="United Kingdom"
| `convert_sales(GBP,£,.64)`

index=sales sourcetype=vendor_sales VendorCountry="India"
| `convert_sales(INR,₹,68)`
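A macro with arguments is essentially a parameterized search-string function; the convert_sales macro used above behaves roughly like this Python analogy (names mirror the macro, not Splunk code):

```python
def convert_sales(usd, currency, symbol, rate):
    """Analog of `convert_sales(currency, symbol, rate)`: add a converted
    field and re-format USD, mirroring the eval in the macro definition."""
    return {
        currency: symbol + format(round(usd * rate, 2), ","),
        "USD": "$" + format(usd, ","),
    }
```

For example, convert_sales(100.0, "GBP", "£", 0.64) yields {"GBP": "£64.0", "USD": "$100.0"}.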



Module 12 - Workflow Actions
----------------------------
Create links to interact with external resources or narrow search.
GET and POST


Module 13 - Data Models
-----------------------

Data Models consist of: Events, Searches, Transactions
Data Model Framework - Pivot is interface to the data

strftime(_time,"%m-%d %A")
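The same format codes work in Python's datetime.strftime, which is a quick way to check a format string before using it in Pivot:

```python
from datetime import datetime

stamp = datetime(2020, 5, 7, 12, 0)        # a Thursday
print(stamp.strftime("%m-%d %A"))          # 05-07 Thursday
```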


Module 14 - CIM Common Information Model
-----------------------------------------
Maps all data to defined method
Normalizes to common language
Data can be normalized at index time or search time
CIM schema should be used for: Field extractions, Event types, Aliases, Tags
Knowledge objects can be shared globally across all apps.

Splunk Admin Notes

Splunk admin
------------

indexer
search head
forwarder
deployment server
license master
cluster master

Indexer
-
Indexing
parsing
searching

Hardware requirement for indexer
--
2cpu 6x2Ghz cores each
12GB RAM
1GbE NIC
64-bit linux
800 IOPS

Search Head Hardware requirement
--
4cpu 4x2Ghz cores each
12GB RAM
1GbE NIC
2x10K RPM 300GB
SAS drives - RAID 1

Forwarder requirements
--
1cpu 2x1.5GHz Cores
1GB RAM

Permission
--
splunk user - do not run as root
or a dedicated Windows user

Time sync is a must
----
ntp

splunk process
--
splunkd

Ports
--
8089 - splunkd
search commands
license and deployment servers
REST API
Command line interface

8000 - splunk web

8065 - Application Server (do not expose externally)
8191 - KV store
9997 - Forwarders (receiving)

Splunk install
--
untar splunk to /opt
#tar zxvf splunk-install.tgz -C /opt
#cd /opt/splunk/bin
#./splunk start --accept-license
#./splunk enable boot-start -user splunk

http://docs.splunk.com/Documentation/Splunk/latest/Troubleshooting/ulimitErrors
http://docs.splunk.com/Documentation/Splunk/latest/ReleaseNotes/SplunkandTHP
http://docs.splunk.com/Documentation/Splunk/latest/Security/SecureSplunkWebusingasignedcertificate

License
--
Data Not Metered:
- Replicated in cluster
- Summary indexed
- Splunk internal logs
- Metadata files

.conf files
--
etc
-system
--search
--launcher
---default default.conf
---local
-apps
-users

.conf
- system settings
- Data input configurations
- Authentication, authorization info
- Index mappings and settings
- Deployment, cluster configurations
- Knowledge objects
- Saved searches

props.conf
allows setting of processing properties:
-line-breaking
-character encoding
-timestamp recognition
-event segmentation
-Automated host, source type matching overrides
-Search-time field extraction definitions

transforms.conf
Allows data transformation configuration:
-Anonymizing sensitive data
-Regex-based host & sourcetype overrides
-Routing events to chosen indexes
-Creating index-time field extractions
-Multiple value extraction on the same field
-Lookup table setup for external sources



Module 5 - Indexes
------------------

Indexes are repositories of data stored in flat files.

summary - used by summary indexing
_internal - splunk internal logs and metrics
_audit - Stores audit trails and optional audit information
_introspection - Splunk system performance and resource usage
_thefishbucket - Checkpoint info for file monitoring inputs

var/lib/splunk/defaultdb - db (hot,warm), colddb (cold), thaweddb (thawed)

https://docs.splunk.com/Documentation/Splunk/latest/Indexer/Automatearchiving
https://docs.splunk.com/Documentation/Splunk/latest/Indexer/Restorearchiveddata
https://docs.splunk.com/Documentation/Splunk/latest/admin/Indexesconf#indexes.conf.spec


cd /opt/splunk/etc/apps/search/local
indexes.conf

Module 6 - User Administration
------------------------------


Module 7
--
upload files for index

Monitor Input Option
--

Universal Forwarder Input Option
--------------------------------
Receiver 9997

Forwarders
--
cd /opt/splunkforwarder/bin
./splunk start --accept-license
./splunk add forward-server ip:9997 -auth
http://docs.splunk.com/Documentation/Splunk/6.2.1/Updating/Planadeployment


Heavy Forwarder
---------------
A heavy forwarder parses the data and forwards it to the indexer for indexing.
Smaller footprint than a full Enterprise deployment
Cannot run distributed searches

http://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Deployaheavyforwarder

http://docs.splunk.com/Documentation/Splunk/latest/Security/Aboutsecuringdatafromforwarders


Module 8 - Grow Deployment
--------------------------
Distributed search -->search peers


outputs.conf
----------

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = IP:9997, IP:9997

http://docs.splunk.com/Documentation/Splunk/latest/DMC/DMCoverview

Splunkbase - app repositories







Tuesday, December 17, 2019

Kubernetes Cluster



Kubernetes cluster


Creating clusters
#To setup the project and cluster
gcloud config set project peaceful-signer-194319
gcloud config set compute/zone us-central1-a

#get the Kubernetes Engine server config
gcloud container get-server-config --zone us-central1-a

#get only the default Kubernetes Engine server version
gcloud container get-server-config --zone us-central1-a | grep defaultClusterVersion:

#create (start) a cluster on Google Cloud Platform with 2 nodes and cluster version 1.9.2
gcloud container clusters create mycluster --num-nodes=2 --cluster-version 1.9.2-gke.1

#to get the cluster credentials and configure the kubectl command-line tool
gcloud container clusters get-credentials mycluster

#to start Minikube on Windows
minikube start --kubernetes-version v1.8.0 --vm-driver hyperv --hyperv-virtual-switch "Primary Virtual Switch"





Resizing a cluster
#setup the project and cluster
gcloud config set project [PROJECT_ID]
gcloud config set compute/zone [COMPUTE/ZONE]

gcloud config set project peaceful-signer-194319
gcloud config set compute/zone us-central1-a

#to create(start) a cluster on GCP with 2 nodes using the default cluster version
gcloud container clusters create mycluster --num-nodes=2

#List existing clusters in the default zone for running containers
gcloud container clusters list

#Resize the cluster, We will change it to one single Node in the Node pool
gcloud container clusters resize mycluster --size 1

#To list the GCP K8 cluster
gcloud container clusters list


Stopping a Cluster

#Although you can’t really ‘stop’ a cluster on GCP you can ensure that no Pods are going
# to be deployed by setting the number of nodes to 0
gcloud container clusters resize mycluster --size=0

#To list the GCP K8 cluster
gcloud container clusters list

#for Minikube on windows
kubectl version
kubectl get all
minikube stop

Deleting a Cluster

# Delete an existing cluster
# --zone overrides the default COMPUTE/ZONE setting
# --async doesn't wait for the operation to complete; returns to the prompt immediately
# there are also GCLOUD WIDE FLAGS like --project, --quiet, --account and so on
gcloud container clusters delete mycluster --zone=us-central1-a --async

#for Minikube
minikube stop
minikube delete

#rm -rf ~/.minikube for Linux

Upgrading a Cluster
# list existing clusters in the default zone
gcloud container clusters list

# to find the supported Kubernetes master and node versions for upgrades
gcloud container get-server-config

#To upgrade the cluster version
gcloud container clusters upgrade mycluster --master --cluster-version 1.9.2-gke.1

#To upgrade nodes
gcloud container clusters upgrade mycluster

#minikube upgrade version
minikube get-k8s-versions
minikube start --kubernetes-version v1.8.0 --vm-driver hyperv --hyperv-virtual-switch "Primary Virtual Switch"
minikube stop
minikube start --kubernetes-version v1.9.0 --vm-driver hyperv --hyperv-virtual-switch "Primary Virtual Switch"
kubectl version | sls "Server Version:"

Understanding Kubeconfig

#To setup the project and cluster
gcloud config set project peaceful-signer-194319
gcloud config set compute/zone us-central1-a

#get the Kubernetes Engine server config
gcloud container get-server-config --zone us-central1-a

#create (start) a cluster on Google Cloud Platform with 2 nodes and cluster version 1.9.2
gcloud container clusters create mycluster --num-nodes=2
gcloud container clusters create mycluster-dev --num-nodes=2

# here is where the default kubeconfig file is located, and it's named config
ls ~/.kube

# if you want to look at the file, you can ‘cat ~/.kube/config’, just be aware that your
# certificate-authority-data will be in full view if you do
kubectl config view

#kubectl config SUBCOMMAND
kubectl config --kubeconfig=d-config get-clusters
kubectl config --kubeconfig=p-config get-clusters
kubectl config get-clusters
#kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#config


Sharing a Cluster

kubectl get namespaces
# Start minikube
minikube start --kubernetes-version v1.9.0 --vm-driver hyperv --hyperv-virtual-switch "Primary Virtual Switch"

#alter your config located at ~/.kube/config

#create two new namespaces
kubectl create -f C:\os_kube\course4\121653\ns-dev.yaml
kubectl create -f C:\os_kube\course4\121653\ns-test.yaml
kubectl get namespaces --show-labels

#set some contexts in the configuration for the cluster
kubectl config set-context dev --namespace=dev --cluster=minikube --user=Joe-dev
kubectl config set-context test --namespace=test --cluster=minikube --user=Joe-test
kubectl config use-context dev
kubectl config current-context

#create a deployment in the dev context
kubectl run nginx --image=nginx --replicas=3

#now switch contexts to test, and see whether we can see the deployment
kubectl config use-context test
kubectl get all




ns-dev.yaml
------------
apiVersion: v1
kind: Namespace
metadata:
  name: dev
  labels:
    name: dev

ns-test.yaml
------------
apiVersion: v1
kind: Namespace
metadata:
  name: test
  labels:
    name: test


Authentication Clusters

#set project and cluster
gcloud config set project peaceful-signer-194319
gcloud config set compute/zone us-central1-a

# On GCP add a user with IAM and give user the role Kubernetes Engine Viewer
# let’s look at the iam policy for this project
gcloud projects get-iam-policy peaceful-signer-194319 --format json > iam.json


Creating an Alpha Cluster
# Alpha clusters are not for production
# they can't be upgraded and are automatically deleted after 30 days

# To create alpha cluster
gcloud alpha container clusters create alphacluster --enable-kubernetes-alpha --cluster-version=1.9.2-gke.1

# you can get the authentication credentials for the cluster
gcloud alpha container clusters get-credentials alphacluster

#later to list the cluster and see when the alpha clusters expire
gcloud alpha container clusters list

Using Cluster Autoscaler

#set project and cluster
gcloud config set project peaceful-signer-194319
gcloud config set compute/zone us-central1-a

# To create an autoscaling cluster use 'gcloud container clusters create'
# '--enable-autoscaling' enables autoscaling
# --min-nodes and --max-nodes are the minimum number of nodes and the maximum number of nodes
gcloud container clusters create mycluster --num-nodes 1 --enable-autoscaling --min-nodes 1 --max-nodes 5



#now let's create the deployment
# then check the deployment and nodes
# you'll see the nodes scale out in a few minutes
kubectl create -f deployment.yaml
kubectl get nodes
kubectl get deploy




#now let's apply a change to the deployment, removing all but 1 pod
# then check the deployment and nodes
# you'll see the nodes scale back in a few minutes
kubectl apply -f alterdeployment.yaml
kubectl get nodes
kubectl get deploy

# To disable autoscaling for a specific node pool
# '--no-enable-autoscaling' tells the cluster to disable autoscaling
# this sets the cluster size at its current default node pool size; it can still be manually updated.
gcloud container clusters update mycluster --no-enable-autoscaling --node-pool default-pool

Friday, December 6, 2019

Vlookup

=VLOOKUP(G2,Sheet2!B:H,7,TRUE)
=VLOOKUP(A2,Sheet2!B:B,1,0)

=VLOOKUP(currentsheetcolumn,Sheet2!B:H,7,0)
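Exact-match VLOOKUP (last argument 0/FALSE) is a keyed row lookup: find the first row whose first column equals the key, then return the Nth column. A Python sketch with made-up sample rows standing in for Sheet2!B:H:

```python
# stand-in for Sheet2!B:H: column B is the key, column H is the 7th column
sheet2 = [
    ("sku-1", "x", "x", "x", "x", "x", 9.99),
    ("sku-2", "x", "x", "x", "x", "x", 4.50),
]

def vlookup(key, table, col):
    """Exact-match VLOOKUP: return 1-based column `col` of the first row
    whose first cell equals `key`; None stands in for #N/A."""
    for row in table:
        if row[0] == key:
            return row[col - 1]
    return None
```

Note that VLOOKUP's approximate mode (TRUE, as in the first formula) additionally requires the key column to be sorted.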

Thursday, November 21, 2019

SSH Port Forward

nohup socat TCP-LISTEN:3306,fork TCP:172.31.65.183:3306 &

Trace the Linux process

 o Start a tcpdump on the affected systems:

  -#tcpdump -s 0 -n host <ipaddress> -w /tmp/$(hostname)-$(date +"%Y-%m-%d-%H-%M-%S").pcap &

 o Gather an strace of a command that easily reproduces this issue, such as a 'cd' or ls of the directory in question:

  -#strace -fvttTyyx -s 1024 -o /tmp/$(hostname)-strace.out <insert command here to reproduce the issue>

 o Once the strace returns an error, stop the tcpdump:

  -#killall tcpdump
