HBase Learning, Part 8: A Custom HBase Coprocessor Endpoint (BlockingRpcCallback) and Problems Encountered

Tags: coprocessor, hbase

HBase RPC uses protobuf as its data exchange format, so writing a custom coprocessor starts with a protobuf definition that carries the request and response between the RPC client and server. On Windows you need the protobuf tooling, e.g.
protoc-2.5.0-win32.zip: http://download.csdn.net/detail/javajxz008/9616971
Unzip it to a folder such as protoc-2.5.0-win32, which contains the protoc.exe compiler. In the same directory, define your own protobuf file,
e.g. pageresult.proto:
option java_package = "com.huateng.ivr.page";   // package of the generated class
option java_outer_classname = "SplitPage";      // name of the generated outer class
option java_generic_services = true;            // generate service stubs
option java_generate_equals_and_hash = true;
option optimize_for = SPEED;

message SplitPageRequest {
    required string rowkey = 1;  // request field; "required" means mandatory; a message maps roughly to a Java class
}


message SplitPageResponse {  // response; a message can nest other messages
   message result {
     required string rowkey = 1;
     required string cf = 2;
     required string col1 = 3;
     required string col2 = 4;
     required string col3 = 5;
   }
   repeated result rs = 6;  // "repeated" is roughly a Java List
}

service SplitPageService {  // the service exposed by the endpoint
  rpc getSplitPageResult(SplitPageRequest)
      returns (SplitPageResponse);
}
From the directory you just unzipped, run protoc --java_out=. ./pageresult.proto. A folder containing the generated SplitPage.java is produced in the current directory; copy that class into a Java project under the same package name. Next comes the coprocessor code itself. My HBase version is 0.98: the server-side endpoint must implement the Coprocessor and CoprocessorService interfaces and extend the SplitPageService class (nested inside the generated SplitPage class). The complete code is as follows:
package com.huateng.ivr.page;

import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.Coprocessor;
import org.apache.hadoop.hbase.CoprocessorEnvironment;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.coprocessor.CoprocessorException;
import org.apache.hadoop.hbase.coprocessor.CoprocessorService;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.filter.BinaryComparator;
import org.apache.hadoop.hbase.filter.CompareFilter;
import org.apache.hadoop.hbase.filter.Filter;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.PageFilter;
import org.apache.hadoop.hbase.filter.RowFilter;
import org.apache.hadoop.hbase.filter.SubstringComparator;
import org.apache.hadoop.hbase.protobuf.ResponseConverter;
import org.apache.hadoop.hbase.regionserver.InternalScanner;
import org.apache.hadoop.hbase.util.Bytes;

import com.google.protobuf.RpcCallback;
import com.google.protobuf.RpcController;
import com.google.protobuf.Service;
import com.huateng.ivr.page.SplitPage.SplitPageRequest;
import com.huateng.ivr.page.SplitPage.SplitPageResponse;
import com.huateng.ivr.page.SplitPage.SplitPageResponse.Builder;
import com.huateng.ivr.page.SplitPage.SplitPageResponse.result;
import com.huateng.ivr.page.SplitPage.SplitPageService;

public class PageServerCoprocessor extends SplitPageService implements
        Coprocessor, CoprocessorService {

    // Sentinel the client sends to request the first page.
    public static final String ROWKEY_FIRST = "00";
    public static final int PAGE_SIZE = 10000;
    public static final String DATA_DATE = "20160820";

    private RegionCoprocessorEnvironment env;

    @Override
    public Service getService() {
        return this;
    }

    @Override
    public void start(CoprocessorEnvironment env) throws IOException {
        if (env instanceof RegionCoprocessorEnvironment) {
            this.env = (RegionCoprocessorEnvironment) env;
        } else {
            throw new CoprocessorException("Must be loaded on a table region!");
        }
    }

    @Override
    public void stop(CoprocessorEnvironment env) throws IOException {
        // nothing to release
    }

    @Override
    public void getSplitPageResult(RpcController controller,
            SplitPageRequest request, RpcCallback<SplitPageResponse> done) {
        Scan scan = new Scan();
        FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ALL);
        // Cap the page size and keep only rows whose key contains the data date.
        filterList.addFilter(new PageFilter(PAGE_SIZE));
        Filter rowFilter1 = new RowFilter(CompareFilter.CompareOp.EQUAL,
                new SubstringComparator(DATA_DATE));
        filterList.addFilter(rowFilter1);
        String lastRowKey = request.getRowkey();
        if (!ROWKEY_FIRST.equals(lastRowKey)) {
            // Not the first page: return only rows strictly after the last key seen.
            Filter rowFilter2 = new RowFilter(CompareFilter.CompareOp.GREATER,
                    new BinaryComparator(Bytes.toBytes(lastRowKey)));
            filterList.addFilter(rowFilter2);
        }
        scan.setFilter(filterList);
        InternalScanner scanner = null;
        SplitPageResponse response = null;
        Builder builder = SplitPageResponse.newBuilder();
        try {
            scanner = env.getRegion().getScanner(scan);
            List<Cell> results = new ArrayList<Cell>();
            boolean hasMore = false;
            do {
                // Each call to next() fills "results" with the cells of one row.
                hasMore = scanner.next(results);
                if (!results.isEmpty()) { // guard against an empty final batch
                    Map<String, String> map = getRowByCellList(results);
                    // The proto fields are "required", so every row is assumed
                    // to carry the col1/col2/col3 qualifiers.
                    SplitPageResponse.result rs = result.newBuilder()
                            .setRowkey(map.get("rk")).setCf(map.get("cf"))
                            .setCol1(map.get("col1")).setCol2(map.get("col2"))
                            .setCol3(map.get("col3")).build();
                    builder.addRs(rs);
                    results.clear();
                }
            } while (hasMore);
            response = builder.build();
        } catch (IOException e) {
            ResponseConverter.setControllerException(controller, e);
        } finally {
            if (scanner != null) {
                try {
                    scanner.close();
                } catch (IOException ignored) {}
            }
        }
        done.run(response);
    }

    // Flatten one row's cells into a map: "rk" -> row key, "cf" -> column family,
    // and qualifier -> value for every column of the row.
    private Map<String, String> getRowByCellList(List<Cell> results) {
        if (results == null) {
            return null;
        }
        Map<String, String> cellMap = new HashMap<String, String>();
        for (Cell cell : results) {
            String rowkey = Bytes.toString(cell.getRowArray(),
                    cell.getRowOffset(), cell.getRowLength());
            String cf = Bytes.toString(cell.getFamilyArray(),
                    cell.getFamilyOffset(), cell.getFamilyLength());
            String qf = Bytes.toString(cell.getQualifierArray(),
                    cell.getQualifierOffset(), cell.getQualifierLength());
            String value = Bytes.toString(cell.getValueArray(),
                    cell.getValueOffset(), cell.getValueLength());
            cellMap.put("rk", rowkey);
            cellMap.put("cf", cf);
            cellMap.put(qf, value);
        }
        return cellMap;
    }
}
Package the PageServerCoprocessor and SplitPage classes into a jar, upload it to an HDFS path, and attach the coprocessor to the table. The code is as follows:
private static void addPageCoprocessor() throws MasterNotRunningException,
        ZooKeeperConnectionException, IOException {
    HBaseAdmin admin = new HBaseAdmin(conf);
    admin.disableTable(tableName);
    HTableDescriptor htd = admin.getTableDescriptor(Bytes.toBytes(tableName));
    HColumnDescriptor columnFamily1 = new HColumnDescriptor("info");
    columnFamily1.setMaxVersions(3);
    columnFamily1.setMinVersions(1);
    htd.addFamily(columnFamily1);
    // Arguments: class name, jar path on HDFS, priority, optional key-value args.
    htd.addCoprocessor(PageServerCoprocessor.class.getCanonicalName(),
            new Path("hdfs://172.30.115.58:8020/apps/hive/warehouse/coprocessor/pagecoprocessor.jar"),
            Coprocessor.PRIORITY_USER, null);
    admin.modifyTable(tableName, htd);
    admin.enableTable(tableName);
    admin.close();
}
You can run describe on the table in the hbase shell to check whether this succeeded; a programmatic check is sketched below. One pitfall here: if the coprocessor fails to load it can take the regionserver down and disrupt HBase as a whole, so make sure the load succeeds. If you want a failed coprocessor load not to affect normal HBase operation, add hbase.coprocessor.abortonerror=false to hbase-site.xml.
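Besides the shell, the same check can be done from client code. This is a minimal sketch under the assumptions of the snippets above (the same conf and tableName variables; the method name checkPageCoprocessor is mine); HTableDescriptor.hasCoprocessor matches on the fully qualified class name:
// Minimal sketch: verify the endpoint is attached to the table.
// Assumes the "conf" and "tableName" variables from the loading snippet above.
private static void checkPageCoprocessor() throws IOException {
    HBaseAdmin admin = new HBaseAdmin(conf);
    try {
        HTableDescriptor htd = admin.getTableDescriptor(Bytes.toBytes(tableName));
        String className = PageServerCoprocessor.class.getCanonicalName();
        if (htd.hasCoprocessor(className)) {
            System.out.println("coprocessor loaded: " + className);
        } else {
            System.out.println("coprocessor NOT loaded; attached: " + htd.getCoprocessors());
        }
    } finally {
        admin.close();
    }
}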
Next comes the client. The core method is as follows (besides the classes above it needs, among others, org.apache.hadoop.hbase.ipc.BlockingRpcCallback and org.apache.hadoop.hbase.client.coprocessor.Batch):
public static String getPageByConditions(String tableName, String rowkey) throws Exception {
    HConnection conn = HConnectionManager.createConnection(conf);
    HTable hTable = (HTable) conn.getTable(Bytes.toBytes(tableName));
    final SplitPageRequest request = SplitPageRequest.newBuilder().setRowkey(rowkey).build();
    String lastRowkey = "";
    try {
        // Invoke the endpoint on every region of the table (null start/end keys).
        Map<byte[], List<result>> res = hTable.coprocessorService(SplitPageService.class,
                null, null, new Batch.Call<SplitPageService, List<result>>() {
                    @Override
                    public List<result> call(SplitPageService service) throws IOException {
                        // BlockingRpcCallback.get() blocks until the server runs the callback.
                        BlockingRpcCallback<SplitPageResponse> rpcCallback =
                                new BlockingRpcCallback<SplitPageResponse>();
                        // A null controller keeps the example short; a ServerRpcController
                        // would let the client inspect server-side failures.
                        service.getSplitPageResult(null, request, rpcCallback);
                        SplitPageResponse response = rpcCallback.get();
                        if (response == null) {
                            // the server hit an IOException and ran the callback with null
                            return new ArrayList<result>();
                        }
                        return response.getRsList();
                    }
                });
        for (Entry<byte[], List<result>> entry : res.entrySet()) {
            for (result r : entry.getValue()) {
                System.out.println("rowkey:" + r.getRowkey() + ",cf:" + r.getCf()
                        + ",col1:" + r.getCol1() + ",col2:" + r.getCol2() + ",col3:" + r.getCol3());
                lastRowkey = r.getRowkey(); // remember the last key for the next page
            }
        }
    } catch (Throwable e) {
        e.printStackTrace();
    } finally {
        hTable.close();
        conn.close();
    }
    return lastRowkey;
}
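With the endpoint returning the last row key of each page, a caller can page through the whole table by feeding that key back into the next request. A minimal driver sketch under this post's conventions ("00" as the first-page sentinel; the table name "my_table" and the no-progress stop condition are my additions):
// Hypothetical driver: feed the last row key of each page into the next call.
// "00" is the first-page sentinel (ROWKEY_FIRST in PageServerCoprocessor).
String last = "00";
while (true) {
    String next = getPageByConditions("my_table", last);
    if (next == null || next.isEmpty() || next.equals(last)) {
        break;  // empty page or no progress: everything has been read
    }
    last = next;
}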
If you hit an error along the lines of "no registered service on table" (I don't remember the exact wording), it means the coprocessor load in the previous step actually failed: check carefully and reload. The HBase reference guide gives a very thorough explanation of coprocessors, with examples:
http://hbase.apache.org/book.html#cp_loading (sidebar entry: Apache HBase Coprocessors)
Copyright notice: this is an original post by the blogger, licensed under CC 4.0 BY-SA; when reposting, include the original source link and this notice.
Original post: https://blog.csdn.net/javajxz008/article/details/52372999
