How to check partitioned table size including indexes and partitions?

Table size with partitions:

The following SQL gives the employee_daily table size, including all of its partitions.

select schemaname,tablename,round(sum(pg_total_relation_size(schemaname || '.' || partitiontablename))/1024/1024) "MB"
from pg_partitions where tablename='employee_daily' group by 1,2;
schemaname | tablename | MB
-----------+----------------+-----
public | employee_daily | 254

How do I get help on syntax to alter table?

In a psql session, type \h alter table to display the syntax:

gpdb=# \h alter table

How to connect in utility mode?

From master host

PGOPTIONS='-c gp_session_role=utility' psql -p <port> -h <hostname>

Where:

port is the segment or master database port.

hostname is the segment or master hostname.

 

Where/How to find db logs?

Master:

The master gpdb logfile is located in the $MASTER_DATA_DIRECTORY/pg_log/ directory; the file name depends on the database "log_filename" parameter. With a default installation the log file format is $MASTER_DATA_DIRECTORY/pg_log/gpdb-yyyy-mm-dd_000000.csv.

~gpadmin/gpAdminLogs/ --> gpstart, gpstop, gpstate and other utility logs.

Segments:

For primary segments, run the SQL below to see the logfile location:

select dbid,hostname,datadir||'/pg_log' from gp_configuration where content not in (-1) and isprimary is true;

For mirror segments, run the SQL below to see the logfile location:

select dbid,hostname,datadir||'/pg_log' from gp_configuration where content not in (-1) and isprimary is false;

How to see the list of available functions in Greenplum DB?

\df schemaname.functionname (schemaname and function name support wildcard characters)

test=# \df pub*.*test*
List of functions
Schema | Name | Result data type | Argument data types
--------+-------------+------------------+---------------------
public | bugtest | integer |
public | test | boolean | integer
public | test | void |
(3 rows)

How to check whether Greenplum server is up and running?

gpstate is the utility to check gpdb status.

Use gpstate -Q to show a quick status. Refer to gpstate --help for more options.

Sample output:

[gpadmin@stinger2]/export/home/gpadmin>gpstate -Q
gpadmin-[INFO]:-Obtaining GPDB array type, [Brief], please wait...
gpadmin-[INFO]:-Obtaining GPDB array type, [Brief], please wait...
gpadmin-[INFO]:-Quick Greenplum database status from Master instance only
gpadmin-[INFO]:----------------------------------------------------------
gpadmin-[INFO]:-GPDB fault action value = readonly
gpadmin-[INFO]:-Valid count in status view = 4
gpadmin-[INFO]:-Invalid count in status view = 0
gpadmin-[INFO]:----------------------------------------------------------

 

How to create a Database?

There are two ways to create a gpdb database: using a psql session or the Greenplum createdb utility.

Using a psql session:

gpdb=# \h create database
Command: CREATE DATABASE
Description: create a new database

Syntax:

CREATE DATABASE name
[ [ WITH ] [ OWNER [=] dbowner ]
[ TEMPLATE [=] template ]
[ ENCODING [=] encoding ]
[ TABLESPACE [=] tablespace ]
[ CONNECTION LIMIT [=] connlimit ] ]

Using createdb utility:

Usage: $GPHOME/bin/createdb --help
createdb [OPTION]... [DBNAME] [DESCRIPTION]
Options:
-D, --tablespace=TABLESPACE default tablespace for the database
-e, --echo show the commands being sent to the server
-E, --encoding=ENCODING encoding for the database
-O, --owner=OWNER database user to own the new database
-T, --template=TEMPLATE template database to copy
--help show this help, then exit
--version output version information, then exit

 

How do I get a list of databases in a Greenplum cluster?

gpdb=# \l (lowercase letter "l")
List of databases
Name | Owner | Encoding
-----------+---------+----------
gpdb | gpadmin | UTF8
gpperfmon | gpadmin | UTF8
postgres | gpadmin | UTF8
template0 | gpadmin | UTF8
template1 | gpadmin | UTF8

Run the SQL below for more details on the databases:
gpdb=# select * from pg_database;

 

How to delete/drop an existing database in Greenplum?

gpdb=# \h DROP Database
Command: DROP DATABASE
Description: remove a database
Syntax: DROP DATABASE [ IF EXISTS ] name

Also check dropdb utility:

 $GPHOME/bin/dropdb --help
dropdb removes a PostgreSQL database.
Usage:
dropdb [OPTION]... DBNAME

 

Where can I get help on postgres psql commands?

In psql session

"\?" - help on all psql session commands.

"\h <SQL Command>" - help on any SQL syntax.

 

gpstart failed, what should I do?

Check the gpstart logfile in ~gpadmin/gpAdminLogs/gpstart_yyyymmdd.log.

Also take a look at the startup log file for more details: $MASTER_DATA_DIRECTORY/pg_log/startup.log

 

Why do we need gpstop -m and gpstart -m?

The gpstart -m command starts the master only and none of the data segments; it is used primarily by support to get system-level information and configuration. gpstop -m stops a master started this way. An end user would not regularly or even normally use these options.

 

What is the procedure to get rid of mirror segments?

There is no utility to remove mirrors from Greenplum. Make sure all primary segments are healthy; then, in 3.x, you can remove the mirror configuration from gp_configuration.

 

How to run gpcheckcat?

The gpcheckcat tool checks for catalog inconsistencies between the master and segments. It can be found in the $GPHOME/bin/lib directory:

Usage: gpcheckcat <option> [dbname]
-?
-B parallel: number of worker threads
-g dir : generate SQL to rectify catalog corruption, put it in dir
-h host : DB host name
-p port : DB port number
-P passwd : DB password
-o : check OID consistency
-U uname : DB User Name
-v : verbose
Example: gpcheckcat gpdb >gpcheckcat_gpdb_logfile.log

 

What is gpdetective and how do I run it in Greenplum?

The gpdetective utility collects information from a running Greenplum Database system and creates a bzip2-compressed tar output file. This output file helps with the diagnosis of Greenplum Database errors or system failures. For more details, check the help:

gpdetective --help

 

How to delete a standby?

To remove the currently configured standby master host from your Greenplum Database system, run the following command on the master only:

# gpinitstandby -r

How to re-sync a standby?

Use this option if you already have a standby master configured, and just want to resynchronize the data between the primary and backup master host. The Greenplum system catalog tables will not be updated.

# gpinitstandby -n (resynchronize)

 

How to recover an invalid segment?

Use the gprecoverseg tool, which will recognize which segments need recovery and will initialize recovery.

3.3.x:

o Without the "-F" option - files are first compared, differences are identified, and only the differing files are synched (this first stage can take a long time if there are many files in the data directory).

o With "-F" option - Entire data directory will be resynched.

4.0.x:

o Without "-F" option - The change tracking log will be sent and applied to the mirror.

o With "-F" option - Entire data directory will be resynched.

 

How to add mirrors to the array?

The gpaddmirrors utility configures mirror segment instances for an existing Greenplum Database system that was initially configured with primary segment instances only.

For more details check help.

# gpaddmirrors --help

 

How to see primary to mirror mapping?

The following query against the database catalog lists the configuration by content ID, from which you can work out the primary and mirror for each content:

gpdb=# select * from gp_configuration order by content;

Note: starting from GPDB 4.x, the gp_segment_configuration table is used instead.

gpdb=# select * from gp_segment_configuration order by dbid;

What are the major differences between Oracle and Greenplum?

Oracle is a traditional relational database. Greenplum is MPP in nature, with a shared-nothing architecture. There are many other differences in functionality and behaviour.

What is good and bad about Greenplum compared to Oracle?

Greenplum is built on top of PostgreSQL. Its shared-nothing MPP architecture is best suited for data warehousing environments and big data analytics.

Oracle is an all-purpose database.

How to find errors / fatal from log files?

grep for ERROR, FATAL, and SIGSEGV in the pg_log directory.

What is vacuum and when should I run this?

VACUUM reclaims storage occupied by deleted tuples. In normal GPDB operation, tuples that are deleted or obsoleted by an update are not physically removed from their table. They remain present on disk until a VACUUM is done. Therefore, it is necessary to run VACUUM periodically, especially on frequently updated tables.

What is difference between vacuum and vacuum full?

Unless you need to return space to the OS so that other tables or other parts of the system can use that space, you should use VACUUM instead of VACUUM FULL.

VACUUM FULL is only needed when you have a table that is mostly dead rows, that is, the vast majority of its contents have been deleted. Even then, there is no point using VACUUM FULL unless you urgently need that disk space back for other things or you expect that the table will never again grow to its past size. Do not use it for table optimization or periodic maintenance as it is counterproductive.

What is ANALYZE and how frequently should I run it?

ANALYZE collects statistics about the contents of tables in the database, and stores the results in the system table pg_statistic. Subsequently, the query planner uses these statistics to help determine the most efficient execution plans for queries.

It is a good idea to run ANALYZE periodically, or just after making major changes in the contents of a table. Accurate statistics will help the query planner to choose the most appropriate query plan, and thereby improve the speed of query processing. A common strategy is to run VACUUM and ANALYZE once a day during a low-usage time of day.
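The daily strategy above can be sketched in a single statement (the table name is illustrative, not from the source):

```sql
-- Run during a low-usage window; VACUUM ANALYZE reclaims dead-tuple
-- space and refreshes planner statistics in one pass.
VACUUM ANALYZE sales;
```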

What are resource queues?

Resource queues are used to manage Greenplum Database workload. All users and queries can be prioritized using resource queues. Refer to the Admin Guide for more details.
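As a sketch of how a resource queue is defined (the queue name, limit, and role are illustrative assumptions, not from the source):

```sql
-- Create a queue that admits at most 10 concurrent statements,
-- then bind a role to it; that role's queries are governed by the queue.
CREATE RESOURCE QUEUE adhoc_queue WITH (ACTIVE_STATEMENTS=10);
ALTER ROLE report_user RESOURCE QUEUE adhoc_queue;
```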

What is gp_toolkit?

gp_toolkit is a database schema with many tables, views, and functions for better managing Greenplum Database while the DB is up. In earlier 3.x releases it was referred to as gp_jetpack.

How to generate DDL for a table?

Use pg_dump utility to generate DDL.

Example:

pg_dump -t njonna.accounts -s -f ddl_accounts.sql

Where:
-f ddl_accounts.sql is the output file.
-t njonna.accounts is the table name (in schema njonna).
-s dumps only the schema, no data.

 

What are the tools available in Greenplum to take backup and restores?

For non-parallel backups:

Use the postgres utilities (pg_dump and pg_dumpall for backup, and pg_restore for restore).

Another useful command for getting data out of the database is COPY <table> TO <file>.

For parallel backups:

gp_dump and gpcrondump for backups and gp_restore for restore process.

 

How do I clone my production database to a PreProd/QA environment?

If Prod and QA are on the same GPDB cluster, use:

CREATE DATABASE <clone_dbname> TEMPLATE <source_db>;

If Prod and QA are on different clusters, use backup and restore utilities.

 

What is difference between pg_dump and gp_dump?

pg_dump - non-parallel backup utility; you need a large file system, since the backup is created on the master node only.

gp_dump - parallel backup utility; the backup is created on the master and segment file systems.

 

What is gpcrondump?

A wrapper utility for gp_dump, which can be called directly or from a crontab entry.

Example: gpcrondump -x <database_name>

 

What are the backup options available at OS level?

Solaris: zfs snapshots at file system level.

All OS: gpcrondump / gp_dump.

 

A Greenplum query is running very slowly but ran fine yesterday; what should I do?

- Check that your connection to the Greenplum cluster is still good if you are using a remote client. You can do this by running the SQL locally on the GP cluster.

- Check that the system tables and user tables involved are not bloated or skewed. Read the jetpack or Greenplum toolkit documentation for how to do this.

- Check with your DBA that the Greenplum interconnect is still performing correctly. This can be done by checking for dropped packets on the interconnect ("netstat -i") and by running gpcheckperf. It is also possible that a segment is experiencing hardware problems, which can be found in the output of dmesg or in /var/log/messages* (Linux) and /var/adm/messages* (Solaris).

 

 

 

How to turn on timing and check how long a query takes to execute?

You can turn on timing per session, before you run your SQL, with the \timing command.

You can also run EXPLAIN ANALYZE against your SQL statement to get the timing.
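A minimal psql session illustrating both techniques (the table name is illustrative):

```sql
\timing on
-- EXPLAIN ANALYZE actually executes the statement and reports
-- per-node actual times alongside the plan.
EXPLAIN ANALYZE SELECT count(*) FROM sales;
```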

How to trace child processes on segment server?

When a session starts on the master, all of its child processes on the segments can be identified by the master session_id in their connection string (con<sess_id>).

For example:

gpdb=# select * from pg_stat_activity;
datid | datname | procpid | sess_id |.. ..
-------+---------+---------+---------+
16986 | gpdb | 18162 | 76134 | .. ..
In all segments child processes for session 76134:
[gpadmin@stinger2]/export/home/gpadmin/gp40>gpssh -f host_file /usr/ucb/ps -auxww |grep con76134
[stinger2] gpadmin 18162 1.7 6.0386000124480 ? S 09:57:55 0:04 postgres: port 4000, gpadmin gpdb [local] con76134 [local] cmd3 CREATE DATABASE.......................................
[stinger2] gpadmin 18625 0.3 2.726056455932 ? S 10:01:56 0:01 postgres: port 40000, gpadmin gpdb 10.5.202.12(18864) con76134 seg0 cmd4 MPPEXEC UTILITY...............................
[stinger2] gpadmin 18669 0.0 0.1 3624 752 pts/2 S 10:02:36 0:00 grep con76134
[stinger3] gpadmin 22289 0.8 9.4531860196404 ? S 09:36:20 0:05 postgres: port 40000, gpadmin gpdb 10.5.202.12(18866) con76134 seg1 cmd4 MPPEXEC UTILITY...............................

 

How to check if my session queries are running or waiting on locks?

Check the "waiting" column in pg_stat_activity and the "granted" column in pg_locks for any object-level locks.
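A sketch of such a check, joining the two views (column names vary by version; GPDB 4.x/PostgreSQL 8.2 uses procpid in pg_stat_activity and pid in pg_locks):

```sql
-- List sessions waiting on ungranted locks and the objects involved.
SELECT a.procpid, a.usename, l.relation::regclass, l.mode, l.granted,
       a.current_query
FROM pg_locks l
JOIN pg_stat_activity a ON l.pid = a.procpid
WHERE NOT l.granted;
```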

 

What kind of locks should we focus on in an MPP system when the system is slow or hung?

Locks that are held for a very long time, with multiple other queries also waiting on them.

How do I monitor user activity history in the Greenplum database?

Use the Greenplum performance monitor (gpperfmon), which has a GUI to monitor and query performance history.

 

What is the Greenplum performance monitor and how do I install it?

It is a monitoring tool that collects statistics on system and query performance and builds up historical data.

 

When a client connects, does it connect to the master or a segment node?

The master.

 

Can you explain the process of data migration from Oracle to Greenplum?

There are many ways. The simplest steps are: unload the data into CSV files; create corresponding tables in the Greenplum database; create an external table; start gpfdist pointing at the external table location; and load the data into Greenplum. You can also use the gpload utility, which creates the external table at runtime.
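The external-table step can be sketched as follows (host, port, file, and table names are illustrative assumptions):

```sql
-- gpfdist is assumed to be serving the extract directory on etlhost:8081.
CREATE EXTERNAL TABLE ext_accounts (id int, name text)
LOCATION ('gpfdist://etlhost:8081/accounts.csv')
FORMAT 'CSV' (HEADER);

-- Parallel load into the target table: every segment pulls from gpfdist.
INSERT INTO accounts SELECT * FROM ext_accounts;
```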

 

Which command would you use to backup a database?

gp_dump,  gpcrondump, pg_dump, pg_dumpall, copy

 

When you restore from a backup taken with gp_dump, can you import a single table?

No, unless the gp_dump backed up only that one table.

 

Can you open and view a dump file?

Which option would you use to export the ddl of the database or table?

-s (-s | --schema-only: dump only the object definitions (schema), not the data).

 

When a user submits a query, where does it run, on the master or the segment nodes?

The master parses and plans the query; the segment nodes execute it in parallel.

If you configure your cluster with master and segment nodes, where does the data reside?

On the segment nodes.

How would you go about query tuning?

  1. Look at the query plan.
  2. Look at the stats of the table(s) in the query.
  3. Look at the table distribution keys and joins in the query.
  4. Look at the network performance.
  5. Look at the resource queues.
  6. Look at the interconnect performance.
  7. Look at the join order of tables in the query.
  8. Look at the query itself, i.e. whether it can be written in a more efficient way.

What would you do when a user or users are complaining that a particular query is running slow?

Follow the same checklist as for query tuning above.

What would you do to gather statistics in the database, as well as reclaim space?

VACUUM FULL, or CTAS.

A VACUUM FULL will reclaim all expired row space, but it is a very expensive operation and may take an unacceptably long time to finish on large, distributed Greenplum Database tables. If the free space map has overflowed, it may be more timely to recreate the table with a CREATE TABLE AS statement and drop the old table. VACUUM FULL is not recommended in Greenplum Database.
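The CTAS route can be sketched as follows (table names are illustrative; in practice you would also carry over the distribution policy, indexes, and grants):

```sql
CREATE TABLE sales_new AS SELECT * FROM sales;
DROP TABLE sales;
ALTER TABLE sales_new RENAME TO sales;
ANALYZE sales;  -- refresh planner statistics on the rebuilt table
```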

 

How would you implement compression, and what are the possible compression types?

There are two types of in-database compression available in the Greenplum Database for append-only tables:

1. Table-level compression is applied to an entire table.

2. Column-level compression is applied to a specific column. You can apply different column-level compression algorithms to different columns.
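As a sketch (table names and settings are illustrative assumptions): table-level compression is declared in the WITH clause of an append-only table, and column-level compression via per-column ENCODING on a column-oriented table:

```sql
-- Table-level: zlib applied to the whole append-only table.
CREATE TABLE sales_history (id int, amount numeric)
WITH (appendonly=true, compresstype=zlib, compresslevel=5);

-- Column-level: different settings per column on a column-oriented table.
CREATE TABLE sales_cold (
    id int ENCODING (compresstype=zlib, compresslevel=1),
    note text ENCODING (compresstype=zlib, compresslevel=9)
)
WITH (appendonly=true, orientation=column);
```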

 

How to check the distribution policy of a test table sales?

Describing the table shows the distribution details:

psql> \d sales
Table "public.sales"
Column | Type | Modifiers
--------+---------+-----------
id | integer |
date | date |
Distributed by: (id)
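The policy shown above is set at table creation time; a sketch mirroring the example table:

```sql
-- Rows are hashed on the distribution key across all segments.
CREATE TABLE sales (id integer, date date)
DISTRIBUTED BY (id);

-- When no good key exists, DISTRIBUTED RANDOMLY spreads rows round-robin.
```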

How many user schemas are there in the database?

Use "\dn" at the psql prompt.

When were my tables last analyzed in the Greenplum database?

In 4.x, check pg_stat_operations for all actions (actionname) performed on any object.

For example, a sales table:

gpdb=# select objname,actionname,statime from pg_stat_operations where objname like 'sales';
objname | actionname | statime
--------+-----------+-------------------------------
sales | CREATE | 2010-10-01 12:18:41.996244-07
sales | ANALYZE | 2010-10-06 14:38:21.933281-07
sales | VACUUM | 2010-10-06 14:38:33.881245-07

 

How to check the size of a table?

Table Level:

psql> select pg_size_pretty(pg_relation_size('schema.tablename'));

Replace schema.tablename with your search table.

Table and Index:

psql> select pg_size_pretty(pg_total_relation_size('schema.tablename'));

Replace schema.tablename with your search table.

 

How to check the Schema size?

Schema Level:

psql> select schemaname ,round(sum(pg_total_relation_size(schemaname||'.'||tablename))/1024/1024) "Size_MB"

from pg_tables where schemaname='SCHEMANAME' group by 1;

Replace SCHEMANAME with your schema name.

How to check the database size?

To see size of specific database:

psql> select pg_size_pretty(pg_database_size('DATABASE_NAME'));

Example: gpdb=# select pg_size_pretty(pg_database_size('gpdb'));
pg_size_pretty
----------------
24 MB
(1 row)

To see all database sizes:

psql> select datname,pg_size_pretty(pg_database_size(datname)) from pg_database;

 
