vgcreate hadoopvg /dev/sdc
lvcreate -L 60G -n hadooplv hadoopvg
lvcreate -l 100%FREE -n repolv hadoopvg
mount /dev/hadoopvg/hadooplv /hadoop
mount /dev/hadoopvg/repolv /repo
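The logical volumes need filesystems before they can be mounted, plus fstab entries if the mounts should survive a reboot. A minimal sketch, assuming XFS; the filesystem type and mount options here are assumptions, not part of the original notes:

```shell
# Assumption: XFS for both volumes (ext4 works the same way with mkfs.ext4).
mkfs.xfs /dev/hadoopvg/hadooplv
mkfs.xfs /dev/hadoopvg/repolv

# Create the mount points used above.
mkdir -p /hadoop /repo

# Persist the mounts across reboots (options are illustrative).
cat >> /etc/fstab <<'EOF'
/dev/hadoopvg/hadooplv  /hadoop  xfs  defaults,noatime  0 0
/dev/hadoopvg/repolv    /repo    xfs  defaults,noatime  0 0
EOF

# Mount everything listed in fstab.
mount -a
```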
1. On the Ambari server, add the following protocols to the “/etc/ambari-server/conf/ambari.properties” file,
then restart the Ambari server: systemctl restart ambari-server
2. Add the following option to the [security] section of “/etc/ambari-agent/conf/ambari-agent.ini” on every host in the cluster,
then restart the Ambari agent: systemctl restart ambari-agent
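The exact property values were not carried over into these notes. For illustration only, the settings commonly documented for restricting weak SSL/TLS protocols in Ambari look like the fragment below; the property names and values are assumptions and must be verified against the security guide for your Ambari version:

```ini
; /etc/ambari-server/conf/ambari.properties (illustrative value)
security.server.disabled.protocols=SSL|SSLv2|SSLv3

; /etc/ambari-agent/conf/ambari-agent.ini, [security] section (illustrative value)
force_https_protocol=PROTOCOL_TLSv1_2
```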
If you installed HDP 2.5 or later, you must manually install the Berkeley DB JAR file required by Falcon. The following instructions apply to Ambari-managed installations.
1. Download the required Berkeley DB implementation file.
wget -O je-5.0.73.jar "http://search.maven.org/remotecontent?filepath=com/sleepycat/je/5.0.73/je-5.0.73.jar"
2. Log in to the Ambari server with administrator privileges.
su - root
3. Copy the file to the Ambari server share folder.
cp je-5.0.73.jar /usr/share/
4. Set permissions on the file to owner=read/write, group=read, other=read.
chmod 644 /usr/share/je-5.0.73.jar
5. Configure the Ambari server to use the Berkeley DB driver.
ambari-server setup --jdbc-db=bdb --jdbc-driver=/usr/share/je-5.0.73.jar
6. Restart the Ambari server.
ambari-server restart
7. Restart the Falcon service from the Ambari UI.
You need to have administrator privileges in Ambari to restart a service.
a) In the Ambari web UI, click the Services tab and select the Falcon service in the left Services pane.
b) From the Falcon Summary page, click Service Actions > Restart All.
c) Click Confirm Restart All.
When the service is available, the Falcon status displays as Started on the Summary page.
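The restart in steps a) through c) can also be triggered through the Ambari REST API, which is useful when scripting or when the web UI is unavailable. A hedged sketch; AMBARI_HOST, CLUSTER_NAME, FALCON_HOST, and the admin credentials are placeholders you must substitute for your environment:

```shell
# Ask Ambari to restart the Falcon server component (illustrative request).
# All capitalized names and the admin:admin credentials are placeholders.
curl -u admin:admin \
     -H 'X-Requested-By: ambari' \
     -X POST \
     -d '{
           "RequestInfo": {"command": "RESTART", "context": "Restart Falcon server"},
           "Requests/resource_filters": [
             {"service_name": "FALCON", "component_name": "FALCON_SERVER", "hosts": "FALCON_HOST"}
           ]
         }' \
     "http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME/requests"
```

The response contains a request ID whose progress can be followed under the same /requests endpoint or in the Ambari UI background operations panel.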
Synopsis: Crashes due to failure to allocate large pages.
On Linux, failures when allocating large pages can lead to crashes. When running JDK 7u51 or later versions, the issue can be recognized in two ways:
- Before the crash happens one or more lines similar to this will have been printed to the log:
os::commit_memory(0x00000006b1600000, 352321536, 2097152, 0) failed;
error='Cannot allocate memory' (errno=12); Cannot allocate large pages, falling back to regular pages
- If an hs_err file is generated, it will contain a line similar to this:
Large page allocation failures have occurred 3 times
The problem can be avoided by turning large page support off, for example by passing the
"-XX:-UseLargePages" option to the java binary.
If you cannot modify the command line, set the option through the JAVA_TOOL_OPTIONS environment variable: export JAVA_TOOL_OPTIONS="-XX:-UseLargePages"
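Before restarting the affected service, the variable can be checked from the shell. Note that JAVA_TOOL_OPTIONS affects every JVM launched from that environment, and the JVM announces it on stderr at startup:

```shell
# Make all subsequently launched JVMs in this shell run without large pages.
export JAVA_TOOL_OPTIONS="-XX:-UseLargePages"

# Confirm the variable holds the expected option.
echo "$JAVA_TOOL_OPTIONS"
```

Any Java process started from this environment will then log a line like `Picked up JAVA_TOOL_OPTIONS: -XX:-UseLargePages` when it starts.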