Category Archives: Centos

Trends to watch for in 2020:

The Rise of the Data Sorcerer:
Marketing departments: all the same in their struggles. How do I accurately target customers to get my brand message out? How do I increase my relevancy? How do I possess the arcane secrets of the Ancient Order of the Golden Dawn? Enter the Data Sorcerer. You may have heard of the Data Scientist, but this marketing asset comes with a little something extra. One part hacker, one part math genius, one part ancient magus able to transmute lead into gold, this position will fit right in on your marketing team.

Off the cuff:

“We need to have a mobile play. What platform should we develop for?” The question that’s all the buzz in board rooms these days.

With a rapidly shifting millennial customer base, companies are increasingly having to develop platforms that meet these on-the-go consumers. Apple has begun sunsetting the Apple Watch experience in favor of its new gamble, the Apple Leather Cuff, inspired by the fashion trends of Creed singer Scott Stapp. “It’s great. It’s got a 2-inch-wide leather strap, plus I can watch Orange Is the New Black.”

Arrive in Style:

First there was UberX, then for the discerning connoisseur of travel there was Uber Black. Uber, looking to help customers flaunt their wealth, is seeking to expand vehicle options beyond the morass of Honda Civics with its newest foray into the luxury market, the Uber Royal.

New to this exciting mode of travel? Four shirtless, Adonis-like men show up at your house carrying a gold-plated carriage lined with red silk. You climb in and they carry you to your destination. “I was looking to make some quick cash on the side, but what really sold me was the app; it’s super easy to adjust availability,” said one of our drivers when we gave it a test.

See and Be Seen

Haven’t heard of Warby Parker? This quirky company, with the slogan “Like TOMS, but for your face,” is a former startup turned eyeglasses super shop. The company started out selling low-priced, hipster-friendly glasses, but is now making waves with its signature Opera Binoculars.

Vintage is all the rage, and things don’t get much more vintage than this: these miniature binoculars first debuted in Victorian-era London. “They have a real passion for optics, so for every pair of Opera Binoculars you buy they give one pair to a child in need in Africa. Also, they’re perfect for Bonnaroo,” said one UCLA student.

 


#!/usr/bin/env python

import sys

# Mapper: read tab-delimited employee records from standard input and
# emit "state,1" for every employee whose salary is at least 75000.
for line in sys.stdin:
    line = line.strip()
    (id, fname, lname, addr, city, state, zip, job, email, active, salary) = line.split("\t")

    if int(salary) >= 75000:
        print "%s,1" % state

—————————————-
#!/usr/bin/env python

import sys

# Reducer: read sorted "state,count" pairs from standard input and
# print a tab-separated total for each state.
previous_state = ''
count_for_state = 0

for line in sys.stdin:
    line = line.strip()

    (state, number) = line.split(",")

    if state == previous_state:
        count_for_state = count_for_state + int(number)
    else:
        if previous_state != '':
            if count_for_state >= 1:
                print "%s\t%d" % (previous_state, count_for_state)
        previous_state = state
        count_for_state = int(number)

# Don't forget the final state's total.
if count_for_state >= 1:
    print "%s\t%d" % (state, count_for_state)
—————————————————–
#!/bin/sh

# Path of Hadoop streaming JAR library
STREAMJAR=/usr/lib/hadoop-0.20-mapreduce/contrib/streaming/hadoop-streaming-*.jar

# Directory in which we’ll store job output
OUTPUT=/user/training/empcounts

# Make sure we don’t have output from a previous run.
# The -r option removes the directory recursively, and
# the -f option prevents Hadoop from warning us if the
# directory doesn’t exist.
hadoop fs -rm -r -f $OUTPUT

# Run this job
hadoop jar $STREAMJAR \
-mapper mapper.py -file mapper.py \
-reducer reducer.py -file reducer.py \
-input /dualcore/employees \
-output $OUTPUT
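Before submitting the job, the mapper and reducer logic can be sanity-checked locally without a cluster. The sketch below re-implements both phases as plain functions and pipes a few invented employee records through mapper → sort → reducer (the helper names and the sample data are made up for illustration; the field order matches the mapper's split):

```python
def map_lines(lines):
    """Emit 'state,1' for each employee earning at least 75000."""
    for line in lines:
        fields = line.strip().split("\t")
        state, salary = fields[5], fields[10]
        if int(salary) >= 75000:
            yield "%s,1" % state

def reduce_pairs(pairs):
    """Sum the counts per state; input must be sorted by state."""
    previous_state, count = None, 0
    for pair in pairs:
        state, number = pair.split(",")
        if state == previous_state:
            count += int(number)
        else:
            if previous_state is not None:
                yield "%s\t%d" % (previous_state, count)
            previous_state, count = state, int(number)
    if previous_state is not None:
        yield "%s\t%d" % (previous_state, count)

# Three invented records: id, fname, lname, addr, city, state, zip,
# job, email, active, salary -- tab-separated, like the real input.
records = [
    "1\tAnn\tLee\t1 Elm St\tAustin\tTX\t78701\tDev\tann@example.com\tY\t90000",
    "2\tBob\tRay\t2 Oak St\tDallas\tTX\t75201\tOps\tbob@example.com\tY\t60000",
    "3\tCal\tKim\t3 Ash St\tReno\tNV\t89501\tDBA\tcal@example.com\tY\t80000",
]

# Hadoop sorts mapper output by key between the two phases.
mapped = sorted(map_lines(records))
print(list(reduce_pairs(mapped)))  # prints ['NV\t1', 'TX\t1']
```

Only the two employees earning 75,000 or more survive the map phase, so each state ends up with a count of 1.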

Fix conky rings in Ubuntu 13.04

After installing Ubuntu 13.04, I had to make the following change to get Conky to draw a transparent background.

Add the following lines to your conkyrc:

 

own_window_argb_visual yes
own_window_argb_value 200

 

 

partitions 101

Video guide: http://www.youtube.com/watch?v=98FBBnbqfUM

Chapter 6 of the Red Hat book.

Notes:

use fdisk -l to list hard drives
the first SATA disk is sda
the second SATA disk is sdb
partitions are listed as sda1, sda2, etc.

to add a new partition to /dev/sdb:
-fdisk /dev/sdb
-n (for new)
-p (for primary)
-it will ask for the partition number (1-4); we are going to say 1
-it is a new disk, so the first cylinder is at 1
-we then specify the size for the partition: +10G
-press p again to print the new partition table
-press w to write the partition table
-once the partition is written we need to either reboot or use partprobe
-then we need to create the filesystem on the partition:
-#mkfs.ext4 /dev/sdb1
-we are now going to mount this new partition /dev/sdb1 to the user1 home directory: mount /dev/sdb1 /home/user1
-we can manually mount and umount this partition
-to have it automatically mount at boot, edit the fstab file
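For reference, the /etc/fstab entry for the partition mounted above would look something like this (the ext4 type matches the mkfs step; the defaults options and fsck pass number are typical choices, not requirements):

```
/dev/sdb1   /home/user1   ext4   defaults   0 2
```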

Swap space is created in a similar manner; however, after creating the partition, use:

#mkswap /dev/sdb*

Then enable the swap space (similar to the mount syntax):

#swapon /dev/sdb*

Logical Volume

1. Create a partition on the hard drive

2. Set up the partition as a Physical Volume (PV)

3. Divide the PV into Physical Extents (PE)

4. Map the PEs into Logical Extents (LE)

5. Logical extents can be formed into logical volumes, which are then formatted with a filesystem
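The steps above map onto the LVM command-line tools roughly as follows. This is a sketch only: the device name (/dev/sdb1), volume group name (myvg), volume name (mylv), size, and mount point are made-up examples, and the commands need root and a real disk, so adjust before running:

```sh
pvcreate /dev/sdb1            # step 2: mark the partition as a Physical Volume
vgcreate myvg /dev/sdb1       # step 3: add the PV to a volume group; its space is tracked as PEs
lvcreate -L 5G -n mylv myvg   # steps 4-5: allocate extents into a Logical Volume
mkfs.ext4 /dev/myvg/mylv      # format the logical volume with a filesystem
mount /dev/myvg/mylv /mnt     # mount it like any other block device
```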

graylog2

http://imcol.in/2012/05/centralized-logging-with-graylog2/

https://gist.github.com/ctavan/3097171

complete guide

http://petes-brain.com/2011/12/my-logging-setup-rsyslog-logstash-and-graylog2/

cent guide

http://blog.milford.io/2012/03/installing-graylog2-0-9-6-elasticsearch-0-18-7-mongodb-2-0-3-on-centos-5-with-rvm/

ubuntu guide

http://nikhgupta.com/code/installing-graylog2-on-ubunty-natty-11-04/

even more complete debian guide:

http://blog.thunter.ca/?p=31

best guide so far from /r/sysadmin

http://blog.dean.io/posts/getting-started-with-graylog2-for-logging-updated-for-0-9-6

http://spinscale.github.com/elasticsearch/2012-03-jugm.html#/12

 

so far:

Install the dependencies:

Java
sudo apt-get install openjdk-7-jre

Elasticsearch:

wget http://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-0.20.5.deb

dpkg -i elasticsearch-0.20.5.deb

Install the Elasticsearch service wrapper.

The service wrapper allows you to start, stop and restart Elasticsearch using:

./elasticsearch/bin/service/elasticsearch start | stop | restart
http://www.elasticsearch.org/tutorials/2010/07/01/setting-up-elasticsearch.html

Modify the path
export PATH=$PATH:/usr/share/elasticsearch/bin/service

then start the service
sudo elasticsearch start
Then link the control script, set the cluster name for graylog2, and start the service:
ln -s `readlink -f elasticsearch/bin/service/elasticsearch` /usr/bin/elasticsearch_ctl
sed -i -e 's|# cluster.name: elasticsearch|cluster.name: graylog2|' /etc/elasticsearch/elasticsearch.yml
/etc/init.d/elasticsearch start

Check to see if it’s working:
curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'
It should return something like the following:

{
  "cluster_name" : "elasticsearch",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0
}

pgbouncer

Download/Build libevent:

Download and install the latest version (2.0 or higher) from http://monkey.org/~provos/libevent/
Extract the compressed file and run:
#./configure
#make
#make install

Untested alternate: sudo apt-get install libevent-dev

Download/Build pgbouncer:

Extract the compressed file and run:
#cd /tmp
#wget http://pgfoundry.org/frs/download.php/3085/pgbouncer-1.4.2.tgz
#tar xzf pgbouncer-1.4.2.tgz
#cd pgbouncer-1.4.2
#./configure --prefix=/usr/local --with-libevent=/usr/local
#make
#make install
The config does not exist yet; you need to create it: sudo cp /usr/local/share/doc/pgbouncer/pgbouncer.ini /etc/pgbouncer.ini

Change ownership of the pgbouncer binaries to the postgres user.

Brief information prior to editing the .ini:

Quick searches revealed a lot of forum posts and emails from people running into problems configuring pgbouncer. I always like to understand what I’m configuring and what the settings do prior to making changes. It looks like Postgres auth for pgbouncer has changed between 8.x and 9.x. Full write-up here: http://www.depesz.com/2010/12/04/auto-refreshing-password-file-for-pgbouncer/

In a nutshell, pgbouncer is configured to look at 8.0/main/global/pg_auth for authentication. However, this file was removed in 9.0+, so we need to manually create the authfile.

Setting up the pgbouncer.auth file

Syntax for the authfile is as follows:

"username" "password"

Multiple ways to create the auth file:

Example of psql being used to write the pgbouncer.auth file. psql can dump output to a file: http://raghavt.blogspot.com/2011/08/connection-pooling-with-pgbouncer-on.html

Here is the psql query to show users and passwords:

postgres=# select rolname, rolpassword from pg_authid;

touch /path/to/pgbouncer.auth

paste into /path/to/pgbouncer.auth
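As a sketch of that approach, psql can write the authfile in one step: -A and -t strip the table formatting, -c runs the query, and -o sends the output to a file. The output path below is the same placeholder used above, and you must run this as a user allowed to read pg_authid:

```sh
psql -U postgres -A -t \
  -c "select '\"' || rolname || '\" \"' || coalesce(rolpassword, '') || '\"' from pg_authid" \
  -o /path/to/pgbouncer.auth
```

The coalesce guards against roles with no password, which would otherwise produce a NULL row instead of an empty quoted string.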

Editing pgbouncer config:

listen_addr = *

listen_port = 6432

auth_type = trust

auth_file =
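Note that a working pgbouncer.ini also needs a [databases] section telling pgbouncer where to forward connections. A minimal example, where the connection name, host, port and dbname are all placeholders to adjust for your setup:

```
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb
```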