OSX HD Cleanup for Developers

So you're an iOS developer and a Mac user, right? I'll show you some tips I use to reclaim my HD space [=

As a Mac user you should have Time Machine enabled in order to keep your stuff backed up.

You should know that Time Machine has Automatic Backup enabled by default, which also takes snapshots on local storage while waiting for a full backup to your external HD.

It’s show time, so open up a Terminal window!

To list these snapshots, run:

sudo tmutil listlocalsnapshots /
com.apple.TimeMachine.2018-10-02-150623
com.apple.TimeMachine.2018-10-03-120519
com.apple.TimeMachine.2018-10-03-130357
com.apple.TimeMachine.2018-10-03-140849
com.apple.TimeMachine.2018-10-03-150352

To delete a local snapshot, type:

sudo tmutil deletelocalsnapshots 2018-10-02-150623
Deleted local snapshot '2018-10-02-150623'

You used to disable local snapshots, you say?

With macOS High Sierra and later you can't (without disabling automatic backups altogether). Correct me if I'm wrong.
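
If you really want to stop local snapshots altogether, the only lever I know of is turning off automatic backups entirely (you lose the hourly automatic backups too, so use it at your own risk):

sudo tmutil disable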

Anyway, if you need to reclaim some HD space you can delete all local snapshots in one shot:

sudo tmutil listlocalsnapshots / | sed 's/com.apple.TimeMachine.//g' | xargs -I % sudo tmutil deletelocalsnapshots %
Deleted local snapshot '2018-10-03-130357'
Deleted local snapshot '2018-10-03-140849'
Deleted local snapshot '2018-10-03-150352'
Deleted local snapshot '2018-10-03-160946'

Another trick you can use is to delete all the old Device Debug Support symbols that Xcode downloads every time you connect a physical device running a new iOS version to your Mac. So you will end up with, for example, debug symbols for 10.3, 10.3.1, 10.3.2, …, 11.0.0, 11.1.0, …, 11.4.1, … and so on.

These symbols are stored in the following directory:

~/Library/Developer/Xcode/iOS DeviceSupport/

Let’s check it out by entering the above directory and typing:

find . -type f -size +1G | xargs -I % du -h %

You should see some big files (more than 1 GB each):

find . -type f -size +1G | xargs -I % du -h %

1.0G ./11.3 (15E216)/Symbols/System/Library/Caches/com.apple.dyld/dyld_shared_cache_arm64
1.0G ./11.4 (15F79)/Symbols/System/Library/Caches/com.apple.dyld/dyld_shared_cache_arm64
1.0G ./11.3.1 (15E302)/Symbols/System/Library/Caches/com.apple.dyld/dyld_shared_cache_arm64
1.0G ./11.2.5 (15D60)/Symbols/System/Library/Caches/com.apple.dyld/dyld_shared_cache_arm64
1.0G ./11.4.1 (15G77)/Symbols/System/Library/Caches/com.apple.dyld/dyld_shared_cache_arm64
1.0G ./11.2.6 (15D100)/Symbols/System/Library/Caches/com.apple.dyld/dyld_shared_cache_arm64
If you need them no more… delete 'em all! (Or only the ones you no longer need [=)
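
For example, here's a hedged sketch of how I'd do it (the version folder name is just one taken from the listing above; double-check your own directory before deleting anything):

cd ~/Library/Developer/Xcode/iOS\ DeviceSupport/
rm -rf "11.3 (15E216)"
# or wipe everything and let Xcode re-download symbols for the devices you still connect:
rm -rf ~/Library/Developer/Xcode/iOS\ DeviceSupport/*
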
I hope these tips are of some help to you. Let me know your tips if you have different ones!
Bye!

Google hacking and Exploits database

For those who don't know what Google Hacking is: it's nothing illegal and, despite the name, you don't have to be an expert to do it.

Google Hacking simply means using the Google search engine to find "holed" (vulnerable) websites, whether they were put online by improvised administrators or by professional but careless ones…

Of course you do need to know something slightly advanced, but I'm talking about the keywords (search operators) you can use in Google's search bar:

The so-called Google Advanced Search is also described on their site: Google Advanced Search. It doesn't actually describe them all… there are many more.

Many are hidden behind this advanced search form directly on Google. Pick the kind of search you want to run and press "Advanced Search".

For example, in the image above I selected terms that appear in the page title and pressed "Advanced Search".

And I was redirected here:

Where Google has already told me which keyword to use.

So, if you want to find all the sites that have the word "blog" in the page title, you just have to search for

allintitle:blog
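
A couple of other standard Google operators worth knowing (the domain and terms here are just placeholders):

site:example.com filetype:pdf
inurl:admin intitle:login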

Okay. Now you have the knowledge. You've studied the theory.

So now you need to get some practice.

But let's make practice fun. That is, let's first practice with the "best practices" of Google Advanced Search.

We'll get those best practices from Exploit-DB, a website maintained by the folks at Offensive-Security (the Kali Linux people), where all the exploits known to date are curated, along with the shellcode to use in your exploits (for your tests at home, of course 🙂).

If you look closely, the menu has a nice "Google Hacking Database" section.

Well, let's not hold it in: let's go there right away. Click on the Google Hacking Database section.

Let's pick whatever we like best from this zero-food-miles farmer's menu. For now I'll choose:
Sensitive Directories

I know you're already reading through the other options…

Fine, go ahead on your own, but remember the saying.

Mind your own business and you'll live a hundred years. ("Fatti i fatti tuoi, campi cent'anni.")

Oh, if I were a hacker or a phisher, I might just set up a directory listing on a website on purpose, so that Google indexes it, and then wait for someone to download my virus out of curiosity… So watch out, folks! 😉

Using Let's Encrypt with IIS on Windows

Finally, after quite some time, someone has "taken the trouble" to build a wrapper around the ACME protocol (the one Certbot speaks) for Windows too, integrated with IIS.

Here is the link to the project on GitHub: https://github.com/Lone-Coder/letsencrypt-win-simple/releases

It is in turn built on top of the .NET ACME protocol library.

Just download the zip and unpack it into a folder on your server.

Then open a command prompt with administrator rights, go to that folder, and simply run the command:

letsencrypt.exe

And follow the steps:

  • enter your e-mail address
  • read and accept the terms of service and policies
  • choose the IIS site for which to request the certificate, and wait

It does everything for you:

  • Requests the issuance of the certificate
  • Saves it on the server
  • Creates the binding on port 443 with the newly issued certificate
  • Creates the scheduled task for automatic renewal

Great! Finally, requesting SSL certificates will no longer be a problem on IIS either! 😀


NMAP – Network Mapping #2 – Port Scanning

Port scanning is Nmap's "core business".

It is the act of remotely probing a number of ports to determine which state they are in.

Simply running the command

nmap <target>

scans the 1,000 most commonly used TCP ports, classifying them into the states

open, closed, filtered, unfiltered, open|filtered, or closed|filtered

The most interesting state is obviously "open", which means an application is listening on that port, ready to accept connections.
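
For example, a harmless first run (scanme.nmap.org is a host the Nmap project explicitly allows you to scan, within reason):

nmap scanme.nmap.org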

Nmap classifies ports by number a bit differently than IANA (Internet Assigned Numbers Authority) does.

According to the IANA standard (http://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml) we have

  • well-known ports, from port 1 to port 1023
  • registered ports, from 1024 to 49151 (users without privileged access can bind their services to these ports)
  • dynamic and/or private ports, from 49152 to 65535 (the maximum port number: it is a 16-bit field)

Nmap instead distinguishes ports into

  • well-known ports 1-1023
  • ephemeral ports, whose range depends on the operating system; on Linux it goes from 32768 to 61000 and is configurable (/proc/sys/net/ipv4/ip_local_port_range)

Port number 0 (zero) is invalid. The Berkeley sockets API, which defines how programs must behave with regard to network communication, does not allow the use of port number zero. Instead, it interprets a request for port zero as a wildcard, meaning the programmer does not care which port is used for the communication, and the operating system will pick one for them.

This does not mean that some malicious software won't want to use exactly this port to communicate, setting it explicitly in the TCP/IP header. So with Nmap we can scan this port explicitly (e.g. -p0-65535).

According to Nmap's author, after years of scans, these are the lists of the Top 20 TCP and UDP ports:

Top-20 TCP ports

  1. 80 (HTTP)
  2. 23 (TELNET)
  3. 443 (HTTPS)
  4. 21 (FTP)
  5. 22 (SSH)
  6. 25 (SMTP)
  7. 3389 (ms-term-server) Microsoft Terminal Services Admin port
  8. 110 (POP3)
  9. 445 (Microsoft-DS) Microsoft SMB file/printer sharing
  10. 139 (NetBIOS-SSN)
  11. 143 (IMAP)
  12. 53 (DNS)
  13. 135 (MSRPC)
  14. 3306 (MySQL)
  15. 8080 (HTTP) usually used for proxying
  16. 1723 (PPTP) for VPNs
  17. 111 (RPCBind)
  18. 995 (POP3S) POP3 over SSL
  19. 993 (IMAPS) IMAP over SSL
  20. 5900 (VNC)

Top-20 UDP ports

  1. 631 (IPP) Internet Printing Protocol
  2. 161 (SNMP)
  3. 137 (NetBIOS-NS)
  4. 123 (NTP)
  5. 138 (NetBIOS-DGM)
  6. 1434 (ms-sql-m) Microsoft SQL Server
  7. 445 (Microsoft-DS)
  8. 135 (MSRPC)
  9. 67 (DHCPS) DHCP server
  10. 53 (DNS)
  11. 139 (NetBIOS-SSN)
  12. 500 (ISAKMP) Internet Security Association and Key Management Protocol
  13. 68 (DHCP client)
  14. 520 (RIP) Routing Information Protocol
  15. 1900 (UPNP)
  16. 4500 (nat-t-ike)
  17. 514 (syslog)
  18. 49152 (Varies)
  19. 162 (SNMPTrap)
  20. 69 (TFTP)

At the beginning of the chapter we said that a port can be found in 6 different states:

  1. open
    an application is listening and ready to accept TCP connections or UDP packets. This is where you can make a breach.
  2. closed
    the port is accessible (it receives and responds to Nmap's probes) but there is no application listening on it. Useful for host discovery or OS detection.
  3. filtered
    we don't know whether the port is open, because some kind of packet filtering (firewall, routing rules) prevents us from reaching it. These ports slow down the scan, because Nmap retries its probes in case the missing responses were caused by a congested network. But usually that is not the case.
  4. unfiltered
    the port is reachable but we cannot tell whether it is open or closed. Only the ACK scan classifies ports in this state.
  5. open|filtered
    Nmap cannot tell whether the port is open or filtered. This usually happens with open ports that give no response. The lack of response could also be due to a packet filter.
  6. closed|filtered
    Nmap cannot tell whether the port is closed or filtered. This state is set only when the IP ID Idle scan (-sI) is used.

Port scanning your network is not just a way to get the list of all open services for security reasons. Some also use port scanning to take an inventory of machines, network devices, and their services, to discover the topology of their network, or to run various kinds of policy compliance checks.


NMAP – Network Mapping #1.1 – Ping Scanning Host Discovery Controls

-sL

If we only want to enumerate the target hosts we have selected, without launching any probe against them, we use the -sL option.

For example:

Fabios-MacBook-Air:~ shadowsheep$ nmap 192.168.1.0/30 -sL

Starting Nmap 7.12 ( https://nmap.org ) at 2016-12-03 09:52 CET
Nmap scan report for 192.168.1.0
Nmap scan report for 192.168.1.1
Nmap scan report for 192.168.1.2
Nmap scan report for 192.168.1.3
Nmap done: 4 IP addresses (0 hosts up) scanned in 0.04 seconds

By default, reverse DNS is performed:

Fabios-MacBook-Air:~ shadowsheep$ ping versionestabile.it
PING www.versionestabile.it (62.149.142.224): 56 data bytes

Fabios-MacBook-Air:~ shadowsheep$ nmap 62.149.142.224 -sL

Starting Nmap 7.12 ( https://nmap.org ) at 2016-12-03 09:54 CET
Nmap scan report for webx458.aruba.it (62.149.142.224)
Nmap done: 1 IP address (0 hosts up) scanned in 0.06 seconds

To see the 16 IPs starting from my site's IP, using Google's DNS server for the reverse DNS, we can therefore run:

Fabios-MacBook-Air:~ shadowsheep$ nmap www.versionestabile.it/28 -sL --dns-server 8.8.8.8

Starting Nmap 7.12 ( https://nmap.org ) at 2016-12-03 10:01 CET
Nmap scan report for www.versionestabile.it (62.149.142.224)
rDNS record for 62.149.142.224: webx458.aruba.it
Nmap scan report for webx459.aruba.it (62.149.142.225)
Nmap scan report for webx460.aruba.it (62.149.142.226)
Nmap scan report for webx461.aruba.it (62.149.142.227)
Nmap scan report for webx462.aruba.it (62.149.142.228)
Nmap scan report for webx463.aruba.it (62.149.142.229)
Nmap scan report for webx464.aruba.it (62.149.142.230)
Nmap scan report for webx465.aruba.it (62.149.142.231)
Nmap scan report for webx466.aruba.it (62.149.142.232)
Nmap scan report for webx467.aruba.it (62.149.142.233)
Nmap scan report for webx468.aruba.it (62.149.142.234)
Nmap scan report for webx469.aruba.it (62.149.142.235)
Nmap scan report for webx470.aruba.it (62.149.142.236)
Nmap scan report for webx471.aruba.it (62.149.142.237)
Nmap scan report for webx472.aruba.it (62.149.142.238)
Nmap scan report for webx473.aruba.it (62.149.142.239)
Nmap done: 16 IP addresses (0 hosts up) scanned in 0.10 seconds

-sn (or -sP in older versions)

This option runs only a ping scan (aka ping sweep): by default an ICMP echo request and a TCP ACK packet to port 80 are sent.

If the user does not have the privileges to send a raw TCP ACK, a TCP SYN is sent instead.

If a privileged user scans a local network, an ARP request (-PR) is also sent, unless the --send-ip option is set.

Checks such as the port scan and OS detection are not enabled, even if specified.

Only the NSE (Nmap Scripting Engine) --script and the traceroute --traceroute are executed, if specified.

Fabios-MacBook-Air:~ shadowsheep$ nmap -sn -T4 www.versionestabile.it/30

Starting Nmap 7.12 ( https://nmap.org ) at 2016-12-03 10:09 CET
Nmap scan report for www.versionestabile.it (62.149.142.224)
Host is up (0.033s latency).
rDNS record for 62.149.142.224: webx458.aruba.it
Nmap scan report for webx459.aruba.it (62.149.142.225)
Host is up (0.029s latency).
Nmap scan report for webx460.aruba.it (62.149.142.226)
Host is up (0.026s latency).
Nmap scan report for webx461.aruba.it (62.149.142.227)
Host is up (0.032s latency).
Nmap done: 4 IP addresses (4 hosts up) scanned in 0.10 seconds

-Pn (or -PN)

Disables the ping scan and treats all hosts as up, applying the later phases to all the specified hosts instead of only those that turned out to be up from the ping scan.

This is useful when some hosts don't respond to the probes or are well shielded by firewalls.

-PS<port list>

Sends an empty TCP packet with the SYN flag set.

The default destination port is port 80, but you can pick an alternative by passing it as a parameter: -PS<port list>.
It is also possible to pass a range of ports, e.g.:

-PS22-25,80,53,113

In this case the probes will be sent in parallel to each specified port.

This packet queries a port telling it that we want to establish a connection. Normally the port is closed, so we will get back a TCP packet with the RST (reset) flag set.

If instead the port is open, we will receive the second step of the three-way handshake, i.e. a TCP packet with the SYN and ACK flags set (SYN/ACK).

Both situations tell us that the target host is alive.
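
A quick, hedged example (the subnet is just a placeholder): host discovery only, using SYN pings against a few common ports:

nmap -sn -PS22,80,443 192.168.1.0/24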

-PA<port list>

Same as before, but it sends an empty TCP packet with the ACK flag set.

In this case, if the host is alive it will always reply with a TCP RST packet, because it has not previously received any TCP SYN packet, let alone sent a TCP SYN/ACK.

It will nonetheless reveal its presence.

Why use -PA rather than -PS? Because some firewalls deliberately block TCP SYN packets on certain ports (e.g. iptables --syn).

If instead the firewall is stateful, it will classify the packet as INVALID because it is not associated with any connection (e.g. iptables --state). In this case the -PS option would instead manage to reveal the host's existence.
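
Since each probe type gets past a different kind of firewall, combining both in a single discovery run is a common approach; a minimal sketch (addresses are placeholders):

nmap -sn -PS80,443 -PA80,443 192.168.1.0/24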

-PU<port list>

UDP Ping is another technique to discover whether a host is there or not.

An empty UDP packet (unless --data-length is used) is sent to the specified ports (default 31338).

If the host is there and the port is closed, we will get an ICMP port unreachable packet in response.

If the port is open, instead, the packet will be ignored and the probe will fail.

This probe gets past firewalls that are configured only for the TCP protocol.
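
A minimal sketch (the target is a placeholder), probing typical UDP services such as DNS and SNMP:

nmap -sn -PU53,161 192.168.1.0/24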

-PE, -PP, and -PM

ICMP Ping does nothing more than send the standard packets our well-known ping command sends.

-PE sends an ICMP type 8 request (echo request) and expects an ICMP type 0 (echo reply) as the response.

-PP sends an ICMP type 13 request (timestamp request) and expects an ICMP type 14 response (timestamp reply).

-PM sends an ICMP type 17 request (address mask request) and expects an ICMP type 18 response (address mask reply).

-PP and -PM can be useful when these types of ICMP packets are not filtered.

-PO<protocol list>

IP Protocol Ping sends IP packets with the specified protocol number in the header.

The defaults are ICMP (protocol 1), IGMP (protocol 2), and IP-in-IP (protocol 4).

If a host is there, what we expect to receive is either an ICMP protocol unreachable message, meaning the host does not speak that protocol, or a response using the same protocol we sent.

-PR

ARP Scan, which is performed by default on any LAN address range, is in these cases (LANs) much faster and much more reliable, since many hosts have echo reply disabled (ping) but cannot avoid answering an ARP request!

On LANs, moreover, the ping scan is problematic because

  1. In order to send an ICMP echo request, the machine must first issue an ARP request to find out which MAC address the target IP belongs to.
  2. Incomplete ARP requests are stored in the host's ARP table, which is limited in size. Some OSes behave strangely when that table fills up.

For example, we can send an ARP ping using an Ethernet frame (layer 2) instead of an IP packet (layer 3) like this:

Fabios-MacBook-Air:~ shadowsheep$ sudo nmap -n -sP -PR --send-eth --packet-trace 192.168.1.24

Starting Nmap 7.12 ( https://nmap.org ) at 2016-12-03 15:11 CET
SENT (0.0100s) ARP who-has 192.168.1.24 tell 192.168.1.2
SENT (0.2171s) ARP who-has 192.168.1.24 tell 192.168.1.2
RCVD (0.2337s) ARP reply 192.168.1.24 is-at B0:C5:54:[...]
Nmap scan report for 192.168.1.24
Host is up (0.017s latency).
MAC Address: B0:C5:54:[...] (D-Link)
Nmap done: 1 IP address (1 host up) scanned in 0.28 seconds

In this case -PR and --send-eth would have been used by default anyway, since we are on a LAN.

--packet-trace, meanwhile, lets us see the packets that were sent and received.

If you absolutely do not want an ARP scan to be performed, you have to say so explicitly with the --send-ip option.

You can change the MAC address you make the request from ( 🙂 ) with the --spoof-mac option.
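
For example (the subnet is a placeholder; passing 0 asks Nmap to generate a completely random MAC address):

sudo nmap -sn -PR --spoof-mac 0 192.168.1.0/24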

Default combination

If we launch nmap without specifying anything, the default is: -PA -PE.

Other notable options

-v (--verbose): nmap prints not only the hosts that are up but plenty of other information about them.

--source-port <portnum> (-g): sets a fixed source port for the ping scanning.

--data-length <length>: appends random data bytes to every packet and works with TCP, UDP, and ICMP scans. This helps evade some firewalls that have rules to drop empty packets.

-T<timing>: speeds up the ping scan. Higher values mean less time. -T4 is recommended.

There are others; you can read about them on page 67 of the Nmap manual or in the man pages.

List of the 14 ports most likely to be accessible

  1. 80/http
  2. 25/smtp
  3. 22/ssh
  4. 443/https
  5. 21/ftp
  6. 113/auth
  7. 23/telnet
  8. 53/domain
  9. 554/rtsp
  10. 3389/ms-term-server
  11. 1723/pptp
  12. 389/ldap
  13. 636/ldapssl
  14. 256/FW1-secureremote

An "ideal" combination

nmap -PE -PP -PS21,22,23,25,80,113,31339 -PA80,113,443,10042 -T4 --source-port 53 -iL <file with hosts> -oA <output file>

Man Section

HOST DISCOVERY:
             -sL: List Scan - simply list targets to scan
             -sn: Ping Scan - disable port scan
             -Pn: Treat all hosts as online -- skip host discovery
             -PS/PA/PU/PY[portlist]: TCP SYN/ACK, UDP or SCTP discovery to given ports
             -PE/PP/PM: ICMP echo, timestamp, and netmask request discovery probes
             -PO[protocol list]: IP Protocol Ping
             -n/-R: Never do DNS resolution/Always resolve [default: sometimes]
             --dns-servers <serv1[,serv2],...>: Specify custom DNS servers
             --system-dns: Use OS's DNS resolver
             --traceroute: Trace hop path to each host

NMAP – Network Mapping #1 – Ping Scanning Hosts Definition

In Nmap's second phase we have the so-called "host discovery".

There are many host discovery techniques we can perform with Nmap.

Let's look at some examples.

We can specify the host or hosts in the following ways:

  • IP Address: 192.168.1.1
  • Hostname: gateway.example.com
  • CIDR (Classless Inter-Domain Routing): 192.168.1.0/24
  • Octets: 192.168.1.0-255
  • Random: -iR <number of hosts to generate> (!! 0 means the whole Internet !!)
    As the documentation says, if you find yourself really bored on a rainy day, you can try running the following command:
    nmap -sS -PS80 -iR 0 -p 80
    to look for random web servers on the Internet.
  • From a file: -iL <filename>, or "-" (hyphen) for input from standard input.
    For example, to check all the currently "alive" hosts to which you have handed out a DHCP lease:
    egrep '^lease' /var/lib/dhcp/dhcp.leases | awk '{print $2}' | nmap -iL -

From your host list you can exclude some entries with the following options:

--exclude followed by comma-separated values (no spaces!!!)
--excludefile <filename> (the file contains the hosts to exclude, in the formats accepted by nmap).
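
For example (addresses are placeholders), a ping scan of a /24 that skips the gateway and one other machine:

nmap -sn 192.168.1.0/24 --exclude 192.168.1.1,192.168.1.253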

DNS resolution

By default, nmap performs reverse DNS on the IPs that respond to the ping-scanning probes (i.e. only for the hosts that are up).

You can perform reverse DNS on all hosts with the -R option.

To skip reverse DNS entirely, the option is -n.

To use one or more DNS servers (other than the system one), use
--dns-servers <server1>[,<server2>...]

Man Section

TARGET SPECIFICATION:
             Can pass hostnames, IP addresses, networks, etc.
             Ex: scanme.nmap.org, microsoft.com/24, 192.168.0.1; 10.0.0-255.1-254
             -iL <inputfilename>: Input from list of hosts/networks
             -iR <num hosts>: Choose random targets
             --exclude <host1[,host2][,host3],...>: Exclude hosts/networks
             --excludefile <exclude_file>: Exclude list from file

Integrating Let's Encrypt with Apache using Certbot


It's always good practice to update your packages first (I usually also run the upgrade, tested in staging beforehand):

sudo apt-get update

If we don't have git installed, we install it:

sudo apt-get install git

We clone certbot from its repository:

git clone https://github.com/certbot/certbot /opt/certbot
cd /opt/certbot
./certbot-auto --help

certbot-auto accepts all the same parameters as certbot; it installs all of its dependencies and takes care of updating itself automatically.

To use it with the native Apache plugin, run:

./certbot-auto --apache

To request a certificate we can run the following command, which installs a certificate for the root domain miodominio.it and for www.miodominio.it:

./certbot-auto --apache -d miodominio.it -d www.miodominio.it

At this point a wizard will start; the important choice is whether to serve both HTTP and HTTPS requests or to redirect all requests to HTTPS.

If everything finishes correctly you will see something like this:

[certbot success screenshot]

Once the wizard is finished, you will find the generated certificates in

/etc/certbot/live/

or, if you had used the beta letsencrypt client and renewed the previous certificate, you will find them here:

/etc/letsencrypt/live/

You can check the status of your certificate here:

https://www.ssllabs.com/ssltest/analyze.html?d=miosito.it&latest

Now try accessing your site over HTTPS!

Certbot also has plenty of auto-renewal features and so on.

You can simulate a renewal with the following command:

./certbot-auto renew --dry-run
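
To have renewal actually run unattended, a common approach is a cron entry along these lines (the path and schedule are just an example, adjust them to your setup):

0 3 * * 1 /opt/certbot/certbot-auto renew --quiet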

Whenever we want to update the client (if updates have been released), we can run:

cd /opt/certbot
sudo git pull

For everything else, I refer you to the official documentation!

https://letsencrypt.org/getting-started/


An automated notification system: Pushover.net


Have you ever needed to be alerted by one of your monitoring systems, be it a shell script, a Python script, Nagios, or one of your own applications, and looked for a good way to be notified of events? Surely you didn't pick e-mail!!!! O_O How reliable is an e-mail? And did you configure that e-mail as one-time, or repeated until acknowledged?

Well, if these have been your needs, I suggest you check out pushover.net.
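
As a sketch of how simple it is (the token and user key are placeholders you get from your Pushover dashboard), sending a notification from any shell script is a single HTTP call to their API:

curl -s \
  --form-string "token=YOUR_APP_TOKEN" \
  --form-string "user=YOUR_USER_KEY" \
  --form-string "message=Backup failed on server X" \
  https://api.pushover.net/1/messages.json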

Happy monitoring, everyone!


Character encoding… that great unknown!

Please read this article: even though it was written in 2003, it is still extremely relevant!

http://www.joelonsoftware.com/articles/Unicode.html

Happy reading!

I'm reproducing it below, in case the article ever goes offline:

The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)

by Joel Spolsky
Wednesday, October 08, 2003

Ever wonder about that mysterious Content-Type tag? You know, the one you’re supposed to put in HTML and you never quite know what it should be?

Did you ever get an email from your friends in Bulgaria with the subject line “???? ?????? ??? ????”?
I’ve been dismayed to discover just how many software developers aren’t really completely up to speed on the mysterious world of character sets, encodings, Unicode, all that stuff. A couple of years ago, a beta tester for FogBUGZ was wondering whether it could handle incoming email in Japanese. Japanese? They have email in Japanese? I had no idea. When I looked closely at the commercial ActiveX control we were using to parse MIME email messages, we discovered it was doing exactly the wrong thing with character sets, so we actually had to write heroic code to undo the wrong conversion it had done and redo it correctly. When I looked into another commercial library, it, too, had a completely broken character code implementation. I corresponded with the developer of that package and he sort of thought they “couldn’t do anything about it.” Like many programmers, he just wished it would all blow over somehow.

But it won’t. When I discovered that the popular web development tool PHP has almost complete ignorance of character encoding issues, blithely using 8 bits for characters, making it darn near impossible to develop good international web applications, I thought, enough is enough.

So I have an announcement to make: if you are a programmer working in 2003 and you don’t know the basics of characters, character sets, encodings, and Unicode, and I catch you, I’m going to punish you by making you peel onions for 6 months in a submarine. I swear I will.

And one more thing:

IT’S NOT THAT HARD.

In this article I’ll fill you in on exactly what every working programmer should know. All that stuff about “plain text = ascii = characters are 8 bits” is not only wrong, it’s hopelessly wrong, and if you’re still programming that way, you’re not much better than a medical doctor who doesn’t believe in germs. Please do not write another line of code until you finish reading this article.

Before I get started, I should warn you that if you are one of those rare people who knows about internationalization, you are going to find my entire discussion a little bit oversimplified. I’m really just trying to set a minimum bar here so that everyone can understand what’s going on and can write code that has a hope of working with text in any language other than the subset of English that doesn’t include words with accents. And I should warn you that character handling is only a tiny portion of what it takes to create software that works internationally, but I can only write about one thing at a time so today it’s character sets.

A Historical Perspective

The easiest way to understand this stuff is to go chronologically.

You probably think I’m going to talk about very old character sets like EBCDIC here. Well, I won’t. EBCDIC is not relevant to your life. We don’t have to go that far back in time.

Back in the semi-olden days, when Unix was being invented and K&R were writing The C Programming Language, everything was very simple. EBCDIC was on its way out. The only characters that mattered were good old unaccented English letters, and we had a code for them called ASCII which was able to represent every character using a number between 32 and 127. Space was 32, the letter "A" was 65, etc. This could conveniently be stored in 7 bits. Most computers in those days were using 8-bit bytes, so not only could you store every possible ASCII character, but you had a whole bit to spare, which, if you were wicked, you could use for your own devious purposes: the dim bulbs at WordStar actually turned on the high bit to indicate the last letter in a word, condemning WordStar to English text only. Codes below 32 were called unprintable and were used for cussing. Just kidding. They were used for control characters, like 7 which made your computer beep and 12 which caused the current page of paper to go flying out of the printer and a new one to be fed in.

And all was good, assuming you were an English speaker.

Because bytes have room for up to eight bits, lots of people got to thinking, "gosh, we can use the codes 128-255 for our own purposes." The trouble was, lots of people had this idea at the same time, and they had their own ideas of what should go where in the space from 128 to 255. The IBM-PC had something that came to be known as the OEM character set which provided some accented characters for European languages and a bunch of line drawing characters… horizontal bars, vertical bars, horizontal bars with little dingle-dangles dangling off the right side, etc., and you could use these line drawing characters to make spiffy boxes and lines on the screen, which you can still see running on the 8088 computer at your dry cleaners'. In fact as soon as people started buying PCs outside of America all kinds of different OEM character sets were dreamed up, which all used the top 128 characters for their own purposes. For example on some PCs the character code 130 would display as é, but on computers sold in Israel it was the Hebrew letter Gimel (ג), so when Americans would send their résumés to Israel they would arrive as rגsumגs. In many cases, such as Russian, there were lots of different ideas of what to do with the upper-128 characters, so you couldn't even reliably interchange Russian documents.

Eventually this OEM free-for-all got codified in the ANSI standard. In the ANSI standard, everybody agreed on what to do below 128, which was pretty much the same as ASCII, but there were lots of different ways to handle the characters from 128 and on up, depending on where you lived. These different systems were called code pages. So for example in Israel DOS used a code page called 862, while Greek users used 737. They were the same below 128 but different from 128 up, where all the funny letters resided. The national versions of MS-DOS had dozens of these code pages, handling everything from English to Icelandic and they even had a few "multilingual" code pages that could do Esperanto and Galician on the same computer! Wow! But getting, say, Hebrew and Greek on the same computer was a complete impossibility unless you wrote your own custom program that displayed everything using bitmapped graphics, because Hebrew and Greek required different code pages with different interpretations of the high numbers.

Meanwhile, in Asia, even more crazy things were going on to take into account the fact that Asian alphabets have thousands of letters, which were never going to fit into 8 bits. This was usually solved by the messy system called DBCS, the "double byte character set" in which some letters were stored in one byte and others took two. It was easy to move forward in a string, but dang near impossible to move backwards. Programmers were encouraged not to use s++ and s-- to move backwards and forwards, but instead to call functions such as Windows' AnsiNext and AnsiPrev which knew how to deal with the whole mess.

But still, most people just pretended that a byte was a character and a character was 8 bits and as long as you never moved a string from one computer to another, or spoke more than one language, it would sort of always work. But of course, as soon as the Internet happened, it became quite commonplace to move strings from one computer to another, and the whole mess came tumbling down. Luckily, Unicode had been invented.

Unicode

Unicode was a brave effort to create a single character set that included every reasonable writing system on the planet and some make-believe ones like Klingon, too. Some people are under the misconception that Unicode is simply a 16-bit code where each character takes 16 bits and therefore there are 65,536 possible characters. This is not, actually, correct. It is the single most common myth about Unicode, so if you thought that, don’t feel bad.

In fact, Unicode has a different way of thinking about characters, and you have to understand the Unicode way of thinking of things or nothing will make sense.

Until now, we’ve assumed that a letter maps to some bits which you can store on disk or in memory:

A -> 0100 0001

In Unicode, a letter maps to something called a code point which is still just a theoretical concept. How that code point is represented in memory or on disk is a whole nuther story.

In Unicode, the letter A is a platonic ideal. It’s just floating in heaven:

A

This platonic A is different than B, and different from a, but the same as A and A and A. The idea that A in a Times New Roman font is the same character as the A in a Helvetica font, but different from "a" in lower case, does not seem very controversial, but in some languages just figuring out what a letter is can cause controversy. Is the German letter ß a real letter or just a fancy way of writing ss? If a letter's shape changes at the end of the word, is that a different letter? Hebrew says yes, Arabic says no. Anyway, the smart people at the Unicode consortium have been figuring this out for the last decade or so, accompanied by a great deal of highly political debate, and you don't have to worry about it. They've figured it all out already.

Every platonic letter in every alphabet is assigned a magic number by the Unicode consortium which is written like this: U+0639. This magic number is called a code point. The U+ means "Unicode" and the numbers are hexadecimal. U+0639 is the Arabic letter Ain. The English letter A would be U+0041. You can find them all using the charmap utility on Windows 2000/XP or visiting the Unicode web site.

There is no real limit on the number of letters that Unicode can define and in fact they have gone beyond 65,536 so not every unicode letter can really be squeezed into two bytes, but that was a myth anyway.

OK, so say we have a string:

Hello

which, in Unicode, corresponds to these five code points:

U+0048 U+0065 U+006C U+006C U+006F.

Just a bunch of code points. Numbers, really. We haven’t yet said anything about how to store this in memory or represent it in an email message.

Encodings

That’s where encodings come in.

The earliest idea for Unicode encoding, which led to the myth about the two bytes, was, hey, let’s just store those numbers in two bytes each. So Hello becomes

00 48 00 65 00 6C 00 6C 00 6F

Right? Not so fast! Couldn’t it also be:

48 00 65 00 6C 00 6C 00 6F 00 ?

Well, technically, yes, I do believe it could, and, in fact, early implementors wanted to be able to store their Unicode code points in high-endian or low-endian mode, whichever their particular CPU was fastest at, and lo, it was evening and it was morning and there were already two ways to store Unicode. So the people were forced to come up with the bizarre convention of storing a FE FF at the beginning of every Unicode string; this is called a Unicode Byte Order Mark and if you are swapping your high and low bytes it will look like a FF FE and the person reading your string will know that they have to swap every other byte. Phew. Not every Unicode string in the wild has a byte order mark at the beginning.

For a while it seemed like that might be good enough, but programmers were complaining. “Look at all those zeros!” they said, since they were Americans and they were looking at English text which rarely used code points above U+00FF. Also they were liberal hippies in California who wanted to conserve (sneer). If they were Texans they wouldn’t have minded guzzling twice the number of bytes. But those Californian wimps couldn’t bear the idea of doubling the amount of storage it took for strings, and anyway, there were already all these doggone documents out there using various ANSI and DBCS character sets and who’s going to convert them all? Moi? For this reason alone most people decided to ignore Unicode for several years and in the meantime things got worse.

Thus was invented the brilliant concept of UTF-8. UTF-8 was another system for storing your string of Unicode code points, those magic U+ numbers, in memory using 8 bit bytes. In UTF-8, every code point from 0-127 is stored in a single byte. Only code points 128 and above are stored using 2, 3, in fact, up to 6 bytes.

How UTF-8 works

This has the neat side effect that English text looks exactly the same in UTF-8 as it did in ASCII, so Americans don’t even notice anything wrong. Only the rest of the world has to jump through hoops. Specifically, Hello, which was U+0048 U+0065 U+006C U+006C U+006F, will be stored as 48 65 6C 6C 6F, which, behold! is the same as it was stored in ASCII, and ANSI, and every OEM character set on the planet. Now, if you are so bold as to use accented letters or Greek letters or Klingon letters, you’ll have to use several bytes to store a single code point, but the Americans will never notice. (UTF-8 also has the nice property that ignorant old string-processing code that wants to use a single 0 byte as the null-terminator will not truncate strings).

So far I’ve told you three ways of encoding Unicode. The traditional store-it-in-two-byte methods are called UCS-2 (because it has two bytes) or UTF-16 (because it has 16 bits), and you still have to figure out if it’s high-endian UCS-2 or low-endian UCS-2. And there’s the popular new UTF-8 standard which has the nice property of also working respectably if you have the happy coincidence of English text and braindead programs that are completely unaware that there is anything other than ASCII.

There are actually a bunch of other ways of encoding Unicode. There’s something called UTF-7, which is a lot like UTF-8 but guarantees that the high bit will always be zero, so that if you have to pass Unicode through some kind of draconian police-state email system that thinks 7 bits are quite enough, thank you it can still squeeze through unscathed. There’s UCS-4, which stores each code point in 4 bytes, which has the nice property that every single code point can be stored in the same number of bytes, but, golly, even the Texans wouldn’t be so bold as to waste that much memory.

And in fact now that you're thinking of things in terms of platonic ideal letters which are represented by Unicode code points, those unicode code points can be encoded in any old-school encoding scheme, too! For example, you could encode the Unicode string for Hello (U+0048 U+0065 U+006C U+006C U+006F) in ASCII, or the old OEM Greek Encoding, or the Hebrew ANSI Encoding, or any of several hundred encodings that have been invented so far, with one catch: some of the letters might not show up! If there's no equivalent for the Unicode code point you're trying to represent in the encoding you're trying to represent it in, you usually get a little question mark: ? or, if you're really good, a box. Which did you get? -> �

There are hundreds of traditional encodings which can only store some code points correctly and change all the other code points into question marks. Some popular encodings of English text are Windows-1252 (the Windows 9x standard for Western European languages) and ISO-8859-1, aka Latin-1 (also useful for any Western European language). But try to store Russian or Hebrew letters in these encodings and you get a bunch of question marks. UTF 7, 8, 16, and 32 all have the nice property of being able to store any code point correctly.

The Single Most Important Fact About Encodings

If you completely forget everything I just explained, please remember one extremely important fact. It does not make sense to have a string without knowing what encoding it uses. You can no longer stick your head in the sand and pretend that “plain” text is ASCII.

There Ain’t No Such Thing As Plain Text.

If you have a string, in memory, in a file, or in an email message, you have to know what encoding it is in or you cannot interpret it or display it to users correctly.

Almost every stupid “my website looks like gibberish” or “she can’t read my emails when I use accents” problem comes down to one naive programmer who didn’t understand the simple fact that if you don’t tell me whether a particular string is encoded using UTF-8 or ASCII or ISO 8859-1 (Latin 1) or Windows 1252 (Western European), you simply cannot display it correctly or even figure out where it ends. There are over a hundred encodings and above code point 127, all bets are off.

How do we preserve this information about what encoding a string uses? Well, there are standard ways to do this. For an email message, you are expected to have a string in the header of the form

Content-Type: text/plain; charset="UTF-8"

For a web page, the original idea was that the web server would return a similar Content-Type http header along with the web page itself — not in the HTML itself, but as one of the response headers that are sent before the HTML page.

This causes problems. Suppose you have a big web server with lots of sites and hundreds of pages contributed by lots of people in lots of different languages and all using whatever encoding their copy of Microsoft FrontPage saw fit to generate. The web server itself wouldn’t really know what encoding each file was written in, so it couldn’t send the Content-Type header.

It would be convenient if you could put the Content-Type of the HTML file right in the HTML file itself, using some kind of special tag. Of course this drove purists crazy… how can you read the HTML file until you know what encoding it’s in?! Luckily, almost every encoding in common use does the same thing with characters between 32 and 127, so you can always get this far on the HTML page without starting to use funny letters:

<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">

But that meta tag really has to be the very first thing in the <head> section because as soon as the web browser sees this tag it’s going to stop parsing the page and start over after reinterpreting the whole page using the encoding you specified.

What do web browsers do if they don’t find any Content-Type, either in the http headers or the meta tag? Internet Explorer actually does something quite interesting: it tries to guess, based on the frequency in which various bytes appear in typical text in typical encodings of various languages, what language and encoding was used. Because the various old 8 bit code pages tended to put their national letters in different ranges between 128 and 255, and because every human language has a different characteristic histogram of letter usage, this actually has a chance of working. It’s truly weird, but it does seem to work often enough that naïve web-page writers who never knew they needed a Content-Type header look at their page in a web browser and it looks ok, until one day, they write something that doesn’t exactly conform to the letter-frequency-distribution of their native language, and Internet Explorer decides it’s Korean and displays it thusly, proving, I think, the point that Postel’s Law about being “conservative in what you emit and liberal in what you accept” is quite frankly not a good engineering principle. Anyway, what does the poor reader of this website, which was written in Bulgarian but appears to be Korean (and not even cohesive Korean), do? He uses the View | Encoding menu and tries a bunch of different encodings (there are at least a dozen for Eastern European languages) until the picture comes in clearer. If he knew to do that, which most people don’t.

For the latest version of CityDesk, the web site management software published by my company, we decided to do everything internally in UCS-2 (two byte) Unicode, which is what Visual Basic, COM, and Windows NT/2000/XP use as their native string type. In C++ code we just declare strings as wchar_t ("wide char") instead of char and use the wcs functions instead of the str functions (for example wcscat and wcslen instead of strcat and strlen). To create a literal UCS-2 string in C code you just put an L before it as so: L"Hello".

When CityDesk publishes the web page, it converts it to UTF-8 encoding, which has been well supported by web browsers for many years. That’s the way all 29 language versions of Joel on Software are encoded and I have not yet heard a single person who has had any trouble viewing them.

This article is getting rather long, and I can’t possibly cover everything there is to know about character encodings and Unicode, but I hope that if you’ve read this far, you know enough to go back to programming, using antibiotics instead of leeches and spells, a task to which I will leave you now.


A stress-test tool for your website? Gatling!

This tool has a very nice feature that lets you record scenarios!

For example, if we want to simulate a purchase on our e-commerce site and then run a stress test on that user journey, we can do it with just a couple of clicks!

Moreover, once a simulation has run, Gatling automatically produces a whole series of HTML5 reports for analyzing the performance of the simulation itself.

http://gatling.io/

