
hbase-webapps/master/hbck.jsp

<%--
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
--%>
<%@ page contentType="text/html;charset=UTF-8"
         import="java.time.Instant"
         import="java.time.ZoneId"
         import="java.util.Date"
         import="java.util.List"
         import="java.util.Map"
         import="java.util.stream.Collectors"
         import="java.time.ZonedDateTime"
         import="java.time.format.DateTimeFormatter"
%>
<%@ page import="org.apache.hadoop.fs.Path" %>
<%@ page import="org.apache.hadoop.hbase.client.RegionInfo" %>
<%@ page import="org.apache.hadoop.hbase.master.HbckChore" %>
<%@ page import="org.apache.hadoop.hbase.master.HMaster" %>
<%@ page import="org.apache.hadoop.hbase.master.ServerManager" %>
<%@ page import="org.apache.hadoop.hbase.ServerName" %>
<%@ page import="org.apache.hadoop.hbase.util.Bytes" %>
<%@ page import="org.apache.hadoop.hbase.util.Pair" %>
<%@ page import="org.apache.hadoop.hbase.master.janitor.CatalogJanitor" %>
<%@ page import="org.apache.hadoop.hbase.master.janitor.Report" %>
<%
  final String cacheParameterValue = request.getParameter("cache");
  final HMaster master = (HMaster) getServletContext().getAttribute(HMaster.MASTER);
  pageContext.setAttribute("pageTitle", "HBase Master HBCK Report: " + master.getServerName());
  if (!Boolean.parseBoolean(cacheParameterValue)) {
    // Run the two reporters inline w/ drawing of the page. If exception, will show in page draw.
    try {
      master.getMasterRpcServices().runHbckChore(null, null);
    } catch (org.apache.hbase.thirdparty.com.google.protobuf.ServiceException se) {
      out.write("Failed generating a new hbck_chore report; using cache; try again or run hbck_chore_run in the shell: " + se.getMessage() + "\n");
    } 
    try {
      master.getMasterRpcServices().runCatalogScan(null, null);
    } catch (org.apache.hbase.thirdparty.com.google.protobuf.ServiceException se) {
      out.write("Failed generating a new catalogjanitor report; using cache; try again or run catalogjanitor_run in the shell: " + se.getMessage() + "\n");
    } 
  }
  HbckChore hbckChore = master.getHbckChore();
Map<String, Pair<ServerName, List<ServerName>>> inconsistentRegions = null;
Map<String, ServerName> orphanRegionsOnRS = null;
Map<String, Path> orphanRegionsOnFS = null;
  long startTimestamp = 0;
  long endTimestamp = 0;
  if (hbckChore != null) {
    inconsistentRegions = hbckChore.getInconsistentRegions();
    orphanRegionsOnRS = hbckChore.getOrphanRegionsOnRS();
    orphanRegionsOnFS = hbckChore.getOrphanRegionsOnFS();
    startTimestamp = hbckChore.getCheckingStartTimestamp();
    endTimestamp = hbckChore.getCheckingEndTimestamp();
  }
  ZonedDateTime zdt = ZonedDateTime.ofInstant(Instant.ofEpochMilli(startTimestamp),
    ZoneId.systemDefault());
  String iso8601start = startTimestamp == 0? "-1": zdt.format(DateTimeFormatter.ISO_OFFSET_DATE_TIME);
  zdt = ZonedDateTime.ofInstant(Instant.ofEpochMilli(endTimestamp),
    ZoneId.systemDefault());
String iso8601end = endTimestamp == 0? "-1": zdt.format(DateTimeFormatter.ISO_OFFSET_DATE_TIME);
  CatalogJanitor cj = master.getCatalogJanitor();
  Report report = cj == null? null: cj.getLastReport();
  final ServerManager serverManager = master.getServerManager();
%>

  


<% if (!master.isInitialized()) { %>
<% } else { %>
<% if (inconsistentRegions != null && inconsistentRegions.size() > 0) { %>

There are three cases: 1. The Master thought this region was open, but no RegionServer reported it (Fix: use the assign command); 2. The Master thought this region was open on Server1, but a RegionServer reported Server2 (Fix: check whether the server still exists. If not, schedule a ServerCrashProcedure for it. If it does, restart Server2 and Server1); 3. More than one RegionServer reports this region as open (Fix: restart those RegionServers). Note: the reported online RegionServers may not be up-to-date when there are regions in transition.

<table>
  <tr>
    <th>Region Name</th>
    <th>Location in META</th>
    <th>Reported Online RegionServers</th>
  </tr>
  <% for (Map.Entry<String, Pair<ServerName, List<ServerName>>> entry : inconsistentRegions.entrySet()) { %>
  <tr>
    <td><%= entry.getKey() %></td>
    <td><%= formatServerName(master, serverManager, entry.getValue().getFirst()) %></td>
    <td><%= entry.getValue().getSecond().stream().map(s -> formatServerName(master, serverManager, s)).
      collect(Collectors.joining(", ")) %></td>
  </tr>
  <% } %>
  <p><%= inconsistentRegions.size() %> region(s) in set.</p>
</table>
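<%--
  A minimal sketch of driving the fixes described above programmatically through the HBase Hbck
  client API (the endpoint the hbck2 tool talks to), assuming an HBase 2.x client where
  Connection#getHbck() is available. The class name, encoded region name, and server name are
  illustrative placeholders only.

  import java.util.Arrays;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.ServerName;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;
  import org.apache.hadoop.hbase.client.Hbck;

  public class InconsistentRegionFixSketch {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      try (Connection conn = ConnectionFactory.createConnection(conf);
           Hbck hbck = conn.getHbck()) {
        // Case 1: the Master thinks the region is open but no RegionServer reports it:
        // re-run assignment for the region (placeholder encoded region name).
        hbck.assigns(Arrays.asList("PLACEHOLDER_ENCODED_REGION_NAME"));

        // Case 2: the location recorded in hbase:meta no longer exists:
        // schedule a ServerCrashProcedure for that server (placeholder server name).
        hbck.scheduleServerCrashProcedures(
          Arrays.asList(ServerName.valueOf("old-host.example.org,16020,1600000000000")));
      }
    }
  }
--%>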
<% } %> <% if (orphanRegionsOnRS != null && orphanRegionsOnRS.size() > 0) { %>
<table>
  <tr>
    <th>Region Name</th>
    <th>Reported Online RegionServer</th>
  </tr>
  <% for (Map.Entry<String, ServerName> entry : orphanRegionsOnRS.entrySet()) { %>
  <tr>
    <td><%= entry.getKey() %></td>
    <td><%= formatServerName(master, serverManager, entry.getValue()) %></td>
  </tr>
  <% } %>
  <p><%= orphanRegionsOnRS.size() %> region(s) in set.</p>
</table>
<% } %> <% if (orphanRegionsOnFS != null && orphanRegionsOnFS.size() > 0) { %>

The below are Regions we've lost account of. To be safe, run a bulk load of any data found under these orphan Region directories so the cluster re-adopts the data. First make sure hbase:meta is in a healthy state, with no holes, overlaps, or inconsistencies (else the bulk load may fail); to repair it, run hbck2 fixMeta. Once this is done, for each Region below, run a bulk load -- $ hbase completebulkload REGION_DIR_PATH TABLE_NAME -- and then delete the desiccated directory content (HFiles are removed upon successful load; all that is left are empty directories and occasionally a seqid marking file).

<table>
  <tr>
    <th>Region Encoded Name</th>
    <th>FileSystem Path</th>
  </tr>
  <% for (Map.Entry<String, Path> entry : orphanRegionsOnFS.entrySet()) { %>
  <tr>
    <td><%= entry.getKey() %></td>
    <td><%= entry.getValue() %></td>
  </tr>
  <% } %>
  <p><%= orphanRegionsOnFS.size() %> region(s) in set.</p>
</table>
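<%--
  A minimal sketch of the fixMeta-then-bulk-load recovery described above, assuming an HBase 2.x
  client where Hbck#fixMeta() and the BulkLoadHFiles tool are available. The class name, table
  name, and orphan directory path are illustrative placeholders only.

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;
  import org.apache.hadoop.hbase.client.Hbck;
  import org.apache.hadoop.hbase.tool.BulkLoadHFiles;

  public class OrphanRegionAdoptSketch {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();

      // Step 1: repair hbase:meta first (the same server-side fix 'hbck2 fixMeta' invokes).
      try (Connection conn = ConnectionFactory.createConnection(conf);
           Hbck hbck = conn.getHbck()) {
        hbck.fixMeta();
      }

      // Step 2: bulk load the HFiles found under one orphan Region directory back into its table
      // (roughly what '$ hbase completebulkload REGION_DIR_PATH TABLE_NAME' does).
      BulkLoadHFiles.create(conf).bulkLoad(
        TableName.valueOf("PLACEHOLDER_TABLE"),
        new Path("hdfs:///hbase/data/default/PLACEHOLDER_TABLE/PLACEHOLDER_ORPHAN_REGION_DIR"));
    }
  }
--%>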
<% } %>
<%
  zdt = ZonedDateTime.ofInstant(Instant.ofEpochMilli(System.currentTimeMillis()),
    ZoneId.systemDefault());
  String iso8601Now = zdt.format(DateTimeFormatter.ISO_OFFSET_DATE_TIME);
  String iso8601reportTime = "-1";
  if (report != null) {
    zdt = ZonedDateTime.ofInstant(Instant.ofEpochMilli(report.getCreateTime()),
      ZoneId.systemDefault());
    iso8601reportTime = zdt.format(DateTimeFormatter.ISO_OFFSET_DATE_TIME);
  }
%>
<% if (report != null && !report.isEmpty()) { %> <% if (!report.getHoles().isEmpty()) { %>
<table>
  <tr>
    <th>RegionInfo</th>
    <th>RegionInfo</th>
  </tr>
  <% for (Pair<RegionInfo, RegionInfo> p : report.getHoles()) { %>
  <tr>
    <td><%= p.getFirst().getRegionNameAsString() %></td>
    <td><%= p.getSecond().getRegionNameAsString() %></td>
  </tr>
  <% } %>
  <p><%= report.getHoles().size() %> hole(s).</p>
</table>
<% } %> <% if (!report.getOverlaps().isEmpty()) { %>
<table>
  <tr>
    <th>RegionInfo</th>
    <th>Other RegionInfo</th>
  </tr>
  <% for (Pair<RegionInfo, RegionInfo> p : report.getOverlaps()) { %>
  <tr>
    <% if (report.getMergedRegions().containsKey(p.getFirst())) { %>
      <td><%= p.getFirst().getRegionNameAsString() %> (merged region)</td>
    <% } else { %>
      <td><%= p.getFirst().getRegionNameAsString() %></td>
    <% } %>
    <% if (report.getMergedRegions().containsKey(p.getSecond())) { %>
      <td><%= p.getSecond().getRegionNameAsString() %> (merged region)</td>
    <% } else { %>
      <td><%= p.getSecond().getRegionNameAsString() %></td>
    <% } %>
  </tr>
  <% } %>
  <p><%= report.getOverlaps().size() %> overlap(s).</p>
</table>
<% } %> <% if (!report.getUnknownServers().isEmpty()) { %>

The below are servers mentioned in the hbase:meta table that are neither 'live' nor known 'dead'. Such a server likely belongs to an older cluster epoch, since replaced by a new instance after a restart or crash. To clear 'Unknown Servers', run 'hbck2 scheduleRecoveries UNKNOWN_SERVERNAME'. This schedules a ServerCrashProcedure, which clears out 'Unknown Server' references and reassigns any Regions that were associated with that host. But first, be sure the referenced Region is not currently stuck looping trying to OPEN. Does it show as a Region-In-Transition on the Master home page? Is it mentioned in the 'Procedures and Locks' Procedures list? If so, it may be stuck in a loop trying to OPEN but unable to because of a missing reference or file. Read the Master log for the most recent mentions of the associated Region name and try to address any such complaint first. If successful, a side-effect should be the cleanup of the 'Unknown Servers' list. It may take a while: OPENs are retried forever, but the interval between retries grows. The 'Unknown Server' may also clear itself because it is just the last RegionServer the Region was successfully opened on; the reference will be purged on the next successful open.

<table>
  <tr>
    <th>RegionInfo</th>
    <th>ServerName</th>
  </tr>
  <% for (Pair<RegionInfo, ServerName> p : report.getUnknownServers()) { %>
  <tr>
    <td><%= p.getFirst().getRegionNameAsString() %></td>
    <td><%= p.getSecond() %></td>
  </tr>
  <% } %>
  <p><%= report.getUnknownServers().size() %> unknown server(s).</p>
</table>
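<%--
  A minimal sketch of what 'hbck2 scheduleRecoveries UNKNOWN_SERVERNAME' amounts to via the Hbck
  client API, assuming an HBase 2.x client where Connection#getHbck() is available. The class name
  and server name are illustrative placeholders only.

  import java.util.Arrays;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.ServerName;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;
  import org.apache.hadoop.hbase.client.Hbck;

  public class UnknownServerRecoverySketch {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      try (Connection conn = ConnectionFactory.createConnection(conf);
           Hbck hbck = conn.getHbck()) {
        // Schedule a ServerCrashProcedure for the unknown server (placeholder server name copied
        // from the 'ServerName' column). The returned procedure ids can be followed on the
        // Master 'Procedures and Locks' page.
        hbck.scheduleServerCrashProcedures(
          Arrays.asList(ServerName.valueOf("unknown-host.example.org,16020,1600000000000")));
      }
    }
  }
--%>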
<% } %> <% if (!report.getEmptyRegionInfo().isEmpty()) { %>
<table>
  <tr>
    <th>Row</th>
  </tr>
  <% for (byte[] row : report.getEmptyRegionInfo()) { %>
  <tr>
    <td><%= Bytes.toStringBinary(row) %></td>
  </tr>
  <% } %>
  <p><%= report.getEmptyRegionInfo().size() %> emptyRegionInfo(s).</p>
</table>
<% } %> <% } %> <% } %>
<%!
  /**
   * Format serverName for display.
   * If a live server reference, make it a link.
   * If dead, make it italic.
   * If unknown, make it plain.
   */
  private static String formatServerName(HMaster master, ServerManager serverManager,
      ServerName serverName) {
    String sn = serverName.toString();
    if (serverManager.isServerOnline(serverName)) {
      int infoPort = master.getRegionServerInfoPort(serverName);
      if (infoPort > 0) {
        // Live server with a known info port: link to its RegionServer status page.
        return "<a href=http://" + serverName.getHostname() + ":" + infoPort + "/rs-status>"
          + sn + "</a>";
      } else {
        // Live server but no known info port: bold the name.
        return "<b>" + sn + "</b>";
      }
    } else if (serverManager.isServerDead(serverName)) {
      // Dead server: italicize.
      return "<i>" + sn + "</i>";
    }
    // Unknown server: plain text.
    return sn;
  }
%>



