Install pgAdmin 4 on Ubuntu 16.04

This installs pgAdmin 4 v1.2 on Ubuntu 16.04, following the download page and the desktop deployment instructions:

Install dependencies, create a virtual environment, download, install & configure

sudo apt-get install virtualenv python-pip libpq-dev python-dev

cd
virtualenv pgadmin4
cd pgadmin4
source bin/activate

wget https://ftp.postgresql.org/pub/pgadmin3/pgadmin4/v1.2/pip/pgadmin4-1.2-py2-none-any.whl

pip install pgadmin4-1.2-py2-none-any.whl

Configure

Write SERVER_MODE = False to lib/python2.7/site-packages/pgadmin4/config_local.py to run in single-user mode:

echo "SERVER_MODE = False" >> lib/python2.7/site-packages/pgadmin4/config_local.py

Run

python lib/python2.7/site-packages/pgadmin4/pgAdmin4.py

Access at http://localhost:5050
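
To run pgAdmin again later, reactivate the virtualenv first (a sketch assuming the paths used above):

cd ~/pgadmin4
source bin/activate
python lib/python2.7/site-packages/pgadmin4/pgAdmin4.py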


Another option is to use Docker and the image provided by thaJeztah – https://github.com/thaJeztah/pgadmin4-docker

$ docker run --rm -p 5050:5050 thajeztah/pgadmin4


Source:
http://askubuntu.com/questions/831262/how-to-install-pgadmin-4-in-desktop-mode-on-ubuntu-16-04


Send mail in Apache TomEE

Mark javax.mail and its implementation with Maven scope provided, as TomEE comes with the Geronimo mail implementation (see the pom.xml sketch below).
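
For example, in pom.xml (these coordinates are a common choice; the versions are illustrative):

<!-- javax.mail API and implementation, provided by TomEE at runtime -->
<dependency>
    <groupId>javax.mail</groupId>
    <artifactId>javax.mail-api</artifactId>
    <version>1.5.6</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>com.sun.mail</groupId>
    <artifactId>javax.mail</artifactId>
    <version>1.5.6</version>
    <scope>provided</scope>
</dependency>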

Also inject the mail session as a resource:

https://dwuysan.wordpress.com/2016/02/17/sending-email-on-apache-tomee/

(If the mail account is Gmail, turn on 'Allow less secure apps'.)

In tomee.xml:

<?xml version="1.0" encoding="UTF-8"?>
<tomee>
    <!-- see http://tomee.apache.org/containers-and-resources.html -->
 
    <!-- activate next line to be able to deploy applications in apps -->
    <!-- <Deployments dir="apps" /> -->
    <Resource id="tomee/mail/GMailSMTP" type="javax.mail.Session">
    mail.smtp.host=smtp.gmail.com
    mail.smtp.starttls.enable=true
    mail.smtp.port=587
    mail.transport.protocol=smtp
    mail.smtp.auth=true
    mail.smtp.user=<!-- your email address -->
    password=<!-- your password, and not 'mail.smtp.password' -->
    </Resource>
</tomee>

Injection of resource:

@Stateless
@LocalBean
@Path(value = "workline")
public class MailService {
    @Resource(mappedName = "java:comp/env/tomee/mail/GMailSMTP")
    private Session smtpSession;
 
    public boolean sendMail() throws NamingException {
        final Message message = new MimeMessage(this.smtpSession);
        try {
            message.setRecipients(Message.RecipientType.TO, new Address[]{
                new InternetAddress("someone@gmail.com")
            });
            message.setSubject("Email from TomEE");
            message.setSentDate(new Date());
            message.setText("Email from TomEE");
            Transport.send(message);
        } catch (Exception e) {
            Logger.getLogger(this.getClass().getName()).log(Level.SEVERE, null, e);
            return false;
        }
        return true;
    }
}

A raw implementation:

package com.origami.rpp.util;

import com.origami.config.SisVars;
import java.io.File;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;
import java.util.Properties;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.mail.Message;
import javax.mail.MessagingException;
import javax.mail.Multipart;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeBodyPart;
import javax.mail.internet.MimeMessage;
import javax.mail.internet.MimeMultipart;

/**
 *
 * @author Origami
 */
public class Email {

    protected List<File> adjuntos = new ArrayList<>();
    protected String usuarioCorreo = SisVars.correo;
    protected String password = SisVars.pass;
    protected String rutaArchivo1;
    protected String nombreArchivo1;
    protected String rutaArchivo2;
    protected String nombreArchivo2;
    protected String destinatario;
    protected String copiaOcultaBCC;
    protected String copiaCC;
    protected String asunto;
    protected String mensaje;

    public Email(String destinatario, String copiaOcultaBCC, String copiaCC, String asunto, String mensaje, List<File> files) {
        this.destinatario = destinatario;
        this.copiaOcultaBCC = copiaOcultaBCC;
        this.copiaCC = copiaCC;
        this.asunto = asunto;
        this.mensaje = mensaje;
        if(files != null) this.adjuntos = files;
    }

    public Email(String destinatario, String asunto, String mensaje, List<File> files) {
        this(destinatario, null, null, asunto, mensaje, files);
    }

    public boolean sendMail() {
        try {
            // Connection properties
            Properties props = new Properties();
            props.setProperty("mail.transport.protocol", "smtp");
            props.setProperty("mail.smtp.host", SisVars.smtp_Host);
            props.setProperty("mail.smtp.starttls.enable", "true");
            props.setProperty("mail.smtp.port", SisVars.smtp_Port);
            props.setProperty("mail.smtp.user", usuarioCorreo);
            props.setProperty("mail.smtp.auth", "true");
            // Create the session instance
            Session session = Session.getInstance(props, null);
            // Build the message and its headers
            MimeMessage mimeMessage = new MimeMessage(session);
            mimeMessage.setFrom(new InternetAddress(usuarioCorreo, "EMPRESA PUBLICA MUNICIPAL DEL REGISTRO DE LA PROPIEDAD DEL CANTON PORTOVIEJO EP"));
            mimeMessage.setSubject(asunto);
            mimeMessage.setSentDate(new Date());
            mimeMessage.addRecipients(Message.RecipientType.TO, InternetAddress.parse(destinatario));
            if (copiaOcultaBCC != null) {
                mimeMessage.addRecipients(Message.RecipientType.BCC, InternetAddress.parse(copiaOcultaBCC));
            }
            if (copiaCC != null) {
                mimeMessage.addRecipients(Message.RecipientType.CC, InternetAddress.parse(copiaCC));
            }
            // Message text
            MimeBodyPart texto = new MimeBodyPart();
            texto.setText(mensaje);
            // Multipart container
            Multipart multipart = new MimeMultipart();
            multipart.addBodyPart(texto);
            // Attach the files as parts
            MimeBodyPart file;
            for (File f : adjuntos ) {
                file = new MimeBodyPart();
                file.attachFile(f.getAbsolutePath());
                multipart.addBodyPart(file);
            }
            // Add the multipart to the message body
            mimeMessage.setContent(multipart);
            // Send the message
            Transport transport = session.getTransport("smtp");
            transport.connect(usuarioCorreo, password);
            transport.sendMessage(mimeMessage, mimeMessage.getAllRecipients());
            transport.close();
        } catch (MessagingException ex) {
            Logger.getLogger(Email.class.getName()).log(Level.SEVERE, null, ex);
            return false;
        } catch (Exception e) {
            Logger.getLogger(Email.class.getName()).log(Level.SEVERE, null, e);
            return false;
        }
        return true;
    }

}

Damaged or corrupt SQLite database recovery

Download sqlite binaries from:
http://www.sqlite.org/download.html

I will use sqlite-tools-win32 on Windows 10:
http://www.sqlite.org/2017/sqlite-tools-win32-x86-3160200.zip

Following the instructions from a Spiceworks article, I check the file integrity:

# (I use PowerShell as administrator)
.\sqlite3.exe .\jalcivar_db

# then check:
pragma integrity_check;

# exit
.quit

If I get error messages, I can go to the next step:

# Extract an SQL dump:
echo .dump | .\sqlite3.exe .\jalcivar_db > jalcivar_db.sql

# With Notepad++, encode .sql to UTF-8 w/o BOM

# Create a new SQLite db file:
.\sqlite3.exe -init .\jalcivar_db.sql .\jalcivar_db.sqlite

.quit

PostgreSQL: restore dump objects from a list

The pg_restore command provides parameters to define the list of objects to restore from a dump.

Use -l to generate the object list from the dump:

--list  [or]  -l

Create a file with that list, delete some objects, and use -L to restore using the new list:

--use-list=list-file  [or]  -L list-file
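
A minimal sketch (the dump file and database names are hypothetical):

# 1. Generate the object list from the dump:
pg_restore -l mydb.dump > objects.list

# 2. Edit objects.list: delete or comment out (with ';') the objects to skip

# 3. Restore only the objects remaining in the list:
pg_restore -L objects.list -d mydb mydb.dump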

https://www.postgresql.org/docs/current/static/app-pgrestore.html
http://stackoverflow.com/a/31043192/3662679

Fix JMX RMI port configuration

Set the following properties on the Java command line:

com.sun.management.jmxremote.port
com.sun.management.jmxremote.rmi.port

Example using a single port, 9229:

-Djava.net.preferIPv4Stack=true 
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=9229 
-Dcom.sun.management.jmxremote.rmi.port=9229
-Djava.rmi.server.hostname=190.57.138.218 
-Dcom.sun.management.jmxremote.authenticate=false 
-Dcom.sun.management.jmxremote.ssl=false

In Tomcat / TomEE, set them in the bin/setenv.sh file:

export CATALINA_OPTS="$CATALINA_OPTS -server -Xmx6000m -Xms512m -XX:MaxPermSize=400m -Djava.net.preferIPv4Stack=true -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9229 -Dcom.sun.management.jmxremote.rmi.port=9229 -Djava.rmi.server.hostname=190.57.138.218 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false"

export JAVA_OPTS="-client -Xms8m -Xmx128m -XX:MaxPermSize=64m -Djava.net.preferIPv4Stack=true"

Due to a bug, this does not work in Java versions <= 1.7.0_51; please upgrade the Java/JDK version.
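
A quick check that JMX is listening only on the configured port (on a Linux host):

netstat -tlnp | grep 9229
# or list every port opened by the java process:
netstat -tlnp | grep java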

Resources:

http://stackoverflow.com/questions/20884353/why-java-opens-3-ports-when-jmx-is-configured/21552812#21552812

PostGIS CREATE EXTENSION error: cannot find libhdf5.so.6

This error is produced by an unlinked version of libhdf5: the current version is 8, but PostGIS links against version 6.

The solution is to add the version 6 location to ldconfig (CentOS 6):

# find the lib, if it exists, in the OS filesystem
# returns: /usr/lib64/openmpi/lib/libhdf5.so.6
find / -name libhdf5.so.6

cd /etc/ld.so.conf.d/
nano postgis-fix.conf

# add to file:
/usr/lib64/openmpi/lib/

# Reload libs configs:
ldconfig

# check the lib:
ldconfig -p | grep libhdf5

References:
http://gis.stackexchange.com/questions/31177/error-creating-a-spatial-database-error-could-not-load-library-usr-pgsql-9

Install Postgresql 9.6 on CentOS 6

# Get correct RPM from http://yum.postgresql.org/repopackages.php

cd /home/
mkdir installers
cd installers

wget https://download.postgresql.org/pub/repos/yum/9.6/redhat/rhel-6-x86_64/pgdg-centos96-9.6-3.noarch.rpm

rpm -i pgdg-centos96-9.6-3.noarch.rpm

# Install EPEL repos as:
# http://www.tecmint.com/how-to-enable-epel-repository-for-rhel-centos-6-5/

wget http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
rpm -ivh epel-release-6-8.noarch.rpm

yum install postgresql96 postgresql96-contrib postgresql96-server postgresql96-devel postgresql96-plpython postgis2_96 postgis2_96-devel postgis2_96-client postgis2_96-utils

service postgresql-9.6 initdb 'en_US.UTF8'
# or
# service postgresql-9.6 initdb 'es_ES.UTF-8'

chkconfig postgresql-9.6 on

# Set a password for the postgres Linux/DB user:

passwd postgres
service postgresql-9.6 restart
su postgres
psql -c "ALTER USER postgres WITH PASSWORD 'xxxxxx'" -d template1
exit

nano /var/lib/pgsql/9.6/data/postgresql.conf

# In the #listen_addresses line, remove the # character (to enable it) and replace the word localhost with the * character
# Set the maximum number of connections (max_connections) to 200.
# Save and close the file with Ctrl+O, Enter, and Ctrl+X

nano /var/lib/pgsql/9.6/data/pg_hba.conf

# In the # IPv4 local connections section, define:
host    all             all             127.0.0.1/32            password
host    all             all             0.0.0.0/0               password

service postgresql-9.6 restart

# firewall rule for port 5432
nano /etc/sysconfig/iptables
# Then add this entry before the -A INPUT -j REJECT line
-A INPUT -p tcp -m state --state NEW -m tcp --dport 5432 -j ACCEPT
# restart firewall
service iptables restart
# verify open ports:
netstat -tulpn
# Make sure iptables is allowing the connections (e.g. port 5432):
iptables -L -n

# Disable SELinux enforcement
nano /etc/selinux/config
# change the SELINUX line to 
SELINUX=disabled
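
The config change takes effect on the next boot; to switch SELinux to permissive mode immediately:

setenforce 0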

Set session variables in PostgreSQL

Before version 9.2, you needed to add your custom class variable to the custom_variable_classes parameter in postgresql.conf, like:

custom_variable_classes = 'myapp'

In 9.2, this requirement has been removed:

Remove the custom_variable_classes parameter (Tom Lane).
The checking provided by this setting was dubious. Now any setting can be prefixed by any class name.

So, since 9.2 you can just set your custom class variable as you are doing currently, no need to worry about changing postgresql.conf.

-- set_config(var_name, value, is_only_for_local_transaction)
-- if is_only_for_local_transaction is false, the variable is set for the entire connection/session
-- Also:  set session "myapp.user" = 'admin';
SELECT set_config('myapp.user', 'admin', FALSE);
-- get var:
SELECT current_setting('myapp.user');

With SET LOCAL (only within a transaction):
Specifies that the command takes effect for only the current transaction. After COMMIT or ROLLBACK, the session-level setting takes effect again. Issuing this outside of a transaction block emits a warning and otherwise has no effect.

-- Also:  set LOCAL "myapp.user" = '22';
SELECT set_config('myapp.user', 'admin', TRUE);
-- get var:
SELECT current_setting('myapp.user');
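
A minimal psql session illustrating the transaction-local behavior:

BEGIN;
SET LOCAL "myapp.user" = 'admin';
SELECT current_setting('myapp.user');  -- 'admin'
COMMIT;
-- after COMMIT the setting reverts; once the placeholder exists,
-- current_setting typically returns an empty string here
SELECT current_setting('myapp.user');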

Source:
https://www.postgresql.org/docs/9.5/static/functions-admin.html#FUNCTIONS-ADMIN-SET-TABLE
http://dba.stackexchange.com/questions/97095/set-session-custom-variable-to-store-user-id
https://www.postgresql.org/docs/current/static/sql-set.html

Replacing EAV with JSONB in PostgreSQL

TL;DR: JSONB has potential for greatly simplifying schema design without sacrificing query performance.

Introduction

It must be one of the oldest use cases in the world of relational databases: you have an entity, and you need to store certain properties of this entity. But, not all entities have the same set of properties, and properties will be added in the future.

The most naive way to solve this problem would be to create a column in your table for each property, and just fill in the ones that are relevant. Great! Problem solved. Until your table contains millions of records and you need to add a new non-null property.

Enter Entity-Attribute-Value. I’ve seen this pattern in almost every database I’ve worked with. One table contains the entities, another table contains the names of the properties (attributes) and a third table links the entities with their attributes and holds the value. This gives you the flexibility for having different sets of properties (attributes) for different entities, and also for adding properties on the fly without locking your table for 3 days.

Nonetheless, I wouldn’t be writing this post if there were no downsides to this approach. Selecting one or more entities based on 1 attribute value requires 2 joins: one with the attribute table and one with the value table. Need entities based on 2 attributes? That’s 4 joins! Also, the properties are usually all stored as strings, which results in type casting, both for the result and for the WHERE clause. If you write a lot of ad-hoc queries, this is very tedious.

Despite these obvious shortcomings, EAV has been used for a long time to solve this kind of problem. It was a necessary evil, and there just was no better alternative. But then PostgreSQL came along with a new feature…

Starting from PostgreSQL 9.4, a JSONB datatype was added for storing binary JSON data. While storing JSON in this format usually takes slightly more space and time than plain-text JSON, executing operations on it is much faster. JSONB also supports indexing, making querying it even faster.

This new data type enables us to replace the tedious EAV pattern by adding a single JSONB column to our entity table, greatly simplifying the database design. But many argue that this must come with a performance cost. That’s why I created this benchmark.

Test database setup

For this comparison, I created a database on a fresh PostgreSQL 9.5 installation on an $80 DigitalOcean Ubuntu 14.04 box. After tuning some settings in postgresql.conf, I ran this script using psql.

The following tables were created for representing the data as EAV.

CREATE TABLE entity ( 
  id           SERIAL PRIMARY KEY, 
  name         TEXT, 
  description  TEXT
);
CREATE TABLE entity_attribute (
  id          SERIAL PRIMARY KEY, 
  name        TEXT
);
CREATE TABLE entity_attribute_value (
  id                  SERIAL PRIMARY KEY, 
  entity_id           INT    REFERENCES entity(id), 
  entity_attribute_id INT    REFERENCES entity_attribute(id), 
  value               TEXT
);

The table below represents the same data, but with the attributes in a JSONB column which I called properties.

CREATE TABLE entity_jsonb (
  id          SERIAL PRIMARY KEY, 
  name        TEXT, 
  description TEXT,
  properties  JSONB
);

A lot simpler, isn’t it?

Then, I loaded the exact same data for both patterns for a total of 10 million entities in the form of the one below. This way, we have some different data types among the attribute set.

{
  "id":          1,
  "name":        "Entity1",
  "description": "Test entity no. 1",
  "properties":  {
    "color":        "red",
    "lenght":       120,
    "width":        3.1882420,
    "hassomething": true,
    "country":      "Belgium"
  }
}
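
The loading script itself isn't reproduced here, but here is a sketch of how the JSONB rows could be generated (illustrative values, key names as in the sample above):

-- generate 10 million entities with a fixed property set
INSERT INTO entity_jsonb (name, description, properties)
SELECT 'Entity' || i,
       'Test entity no. ' || i,
       jsonb_build_object(
         'color',        'red',
         'lenght',       120,
         'width',        3.1882420,
         'hassomething', true,
         'country',      'Belgium')
FROM generate_series(1, 10000000) AS i;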

So, we now have the same data stored in both formats. Let’s start comparing!

Query aesthetics

Earlier it was already clear that the design of the database was greatly simplified by using a JSONB column for the properties instead of the 3-table EAV model. But does this also reflect in the queries?

Updating a single entity property looks like this:

-- EAV
UPDATE entity_attribute_value 
SET value = 'blue' 
WHERE entity_attribute_id = 1 
  AND entity_id = 120;

-- JSONB
UPDATE entity_jsonb 
SET properties = jsonb_set(properties, '{"color"}', '"blue"') 
WHERE id = 120;

Admittedly, the latter doesn’t look simpler. To update the property in the JSONB object, we have to use the jsonb_set() function, and we have to pass our new value as a JSONB object. However, we don’t need to know any IDs upfront. When you look at the EAV example, you have to know both the entity_id and the entity_attribute_id to perform the update. If you want to update a property in the JSONB column based on the entity name, go ahead, it’s all in the same row.
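
For instance, updating by entity name instead of id (same schema; the name is hypothetical):

-- JSONB: match on any column of the same row, no ids required
UPDATE entity_jsonb
SET properties = jsonb_set(properties, '{color}', '"blue"')
WHERE name = 'Entity120';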

Now, let’s select that entity we just updated, based on its new color:

-- EAV
SELECT e.name 
FROM entity e 
  INNER JOIN entity_attribute_value eav ON e.id = eav.entity_id
  INNER JOIN entity_attribute ea ON eav.entity_attribute_id = ea.id
WHERE ea.name = 'color' AND eav.value = 'blue';

-- JSONB
SELECT name 
FROM entity_jsonb 
WHERE properties ->> 'color' = 'blue';

I think we can agree that the second is both shorter (no joins!) and more pleasing to the eye. A clear win for JSONB! Here, we use the JSON ->> operator to get the color as a text value from the JSONB object. There is also a second way to achieve the same result in the JSONB model, using the @> containment operator:

-- JSONB 
SELECT name 
FROM entity_jsonb 
WHERE properties @> '{"color": "blue"}';

This is a bit more complex: we check if the JSON object in the properties column contains the object on the right of the operator. Less readable, more performant (see later).

The simplification of using JSONB is even stronger when you need to select multiple properties at once. This is where the JSONB approach really shines: we just select the properties as extra columns in our result set, without the need for joins.

-- JSONB 
SELECT name
  , properties ->> 'color'
  , properties ->> 'country'
FROM entity_jsonb 
WHERE id = 120;

With EAV, you would need 2 joins per property you want to query.
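
For comparison, a sketch of the EAV equivalent of the two-property select above, which already needs 4 joins:

-- EAV: 2 joins per selected property
SELECT e.name, eav1.value AS color, eav2.value AS country
FROM entity e
  INNER JOIN entity_attribute_value eav1 ON e.id = eav1.entity_id
  INNER JOIN entity_attribute ea1 ON eav1.entity_attribute_id = ea1.id AND ea1.name = 'color'
  INNER JOIN entity_attribute_value eav2 ON e.id = eav2.entity_id
  INNER JOIN entity_attribute ea2 ON eav2.entity_attribute_id = ea2.id AND ea2.name = 'country'
WHERE e.id = 120;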

In my opinion, the queries above show a great simplification in database design. If you want more examples on how to query JSONB data, check out this post.

Now, let’s talk performance.

Performance

To compare performance, I used EXPLAIN ANALYSE on the queries above to see how long they take. I did each query at least three times, because the first time the query planner needs some more time. At first, I executed the queries without any indexes. This will obviously be to the advantage of JSONB, as the joins required for EAV can’t make use of index scans (the foreign key fields aren’t indexed). After that, I created an index on the 2 foreign key columns in the EAV value table, and also a GIN index on the JSONB column.
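
The exact statements aren't listed in the post; they were presumably along these lines:

-- EAV: index the foreign key columns of the value table
CREATE INDEX ON entity_attribute_value (entity_id);
CREATE INDEX ON entity_attribute_value (entity_attribute_id);
-- JSONB: a GIN index on the whole properties column
CREATE INDEX entity_jsonb_properties_idx ON entity_jsonb USING GIN (properties);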

For updating the data, this resulted in these execution times (in ms). Note that the scale is logarithmic:

[Chart: update query execution times, EAV vs. JSONB]

Here, we see that JSONB is much (>50000x) faster than EAV when not using any indexes, for the reason mentioned above. When we index the foreign key columns the difference is almost eliminated, but JSONB is still 1.3x faster than EAV. Notice that the index on the JSONB column does not have any effect here, as we don’t use the properties column in the criteria.

For selecting data based on a property value, we get the following results (normal scale):

[Chart: select query execution times, EAV vs. JSONB]

Here we can see that JSONB was again faster without indexes for EAV, but when the index is used EAV is the fastest. But then I noticed the times for the JSONB queries were the same, pointing me to the fact that the GIN index is not used. Apparently, when you use a GIN index on the full properties column, it only has effect when using the containment (@>) operator. I added this to the benchmark and it had a huge effect on the timing: only 0.153ms! That’s 15000x faster than EAV, and 25000x faster than the ->> operator.
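
You can verify which index a query uses with EXPLAIN ANALYSE; with the GIN index in place, the containment query should show a bitmap index scan on entity_jsonb_properties_idx:

EXPLAIN ANALYSE
SELECT name FROM entity_jsonb WHERE properties @> '{"color": "blue"}';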

For me, that’s fast enough.

Table size

Let’s compare the sizes of both approaches. In psql we can show the size of all tables and indexes using the \dti+ command:

test-# \dti+
                                                     List of relations
                      Name                      | Type  |         Table          |  Size   |
------------------------------------------------+-------+------------------------+---------+
 entity                                         | table |                        | 730 MB  |
 entity_attribute                               | table |                        | 48 kB   |
 entity_attribute_name_idx                      | index | entity_attribute       | 16 kB   |
 entity_attribute_name_key                      | index | entity_attribute       | 16 kB   |
 entity_attribute_pkey                          | index | entity_attribute       | 16 kB   |
 entity_attribute_value                         | table |                        | 2338 MB |
 entity_attribute_value_entity_attribute_id_idx | index | entity_attribute_value | 1071 MB |
 entity_attribute_value_entity_id_idx           | index | entity_attribute_value | 1071 MB |
 entity_attribute_value_pkey                    | index | entity_attribute_value | 1071 MB |
 entity_jsonb                                   | table |                        | 1817 MB |
 entity_jsonb_pkey                              | index | entity_jsonb           | 214 MB  |
 entity_jsonb_properties_idx                    | index | entity_jsonb           | 104 MB  |
 entity_pkey                                    | index | entity                 | 214 MB  |
(13 rows)

For the EAV model, the tables add up to 3068 MB and the indexes add up to 3427 MB, resulting in a total of 6.43 GB. On the other hand, the JSONB model uses 1817 MB for the table, and 318 MB for the indexes, totaling 2.08 GB. That’s 3x less. This surprised me a bit, because we store the property names in each JSONB object. But when you think about it, in EAV we store 2 integer foreign keys per attribute value, resulting in 8 bytes of extra data. Also, in EAV all property values are stored as text, while JSONB will use numeric and boolean values internally where possible, resulting in less space.

Conclusion

In general, I think storing entity properties in JSONB format can greatly simplify your database design and maintenance. If, like me, you do a lot of ad-hoc querying, having everything stored in the same table as the entity is really useful. The fact that it simplifies interacting with your data is already a plus, but the resulting database is also 3x smaller and from my tests it seems that the performance penalties are very limited. In some cases JSONB even performs faster than EAV, which makes it even better.

However, this benchmark does of course not cover all aspects (like entities with a very large number of properties, a large increase in the number of properties of existing data, …), so if you have any suggestions on how to improve it, please feel free to leave a comment!

Repost from:
http://coussej.github.io/2016/01/14/Replacing-EAV-with-JSONB-in-PostgreSQL/