Elasticsearch 6.5.4 Cluster Installation and Password Setup

Tags: Elasticsearch, Big Data

1. ES Installation and Configuration Files

1.1 Master Node Configuration File

# ======================== Elasticsearch Configuration =========================

# ---------------------------------- Cluster -----------------------------------
cluster.name: GzEsCluster
# ------------------------------------ Node ------------------------------------
node.name: master
node.master: true
node.data: false
node.ingest: true
#node.attr.rack: r1
# ----------------------------------- Paths ------------------------------------
#path.data: /path/to/data
path.data: /home/hadoop/es/data/data/
#path.logs: /path/to/logs
path.logs: /home/hadoop/es/data/logs/
# ----------------------------------- Memory -----------------------------------
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
# ---------------------------------- Network -----------------------------------
network.host: 192.168.206.152
http.port: 9200
transport.tcp.port: 9310
# --------------------------------- Discovery ----------------------------------
#discovery.zen.ping.multicast.enabled: true
discovery.zen.ping.unicast.hosts: [ "192.168.206.152:9310", "192.168.206.153:9310", "192.168.206.154:9310" ]
discovery.zen.minimum_master_nodes: 1 
# ---------------------------------- Gateway -----------------------------------
#gateway.recover_after_nodes: 3
# ---------------------------------- Various -----------------------------------
#action.destructive_requires_name: true

### 20180913 set
cluster.routing.allocation.same_shard.host: true
#index.number_of_replicas: 3
#index.number_of_shards: 27
# Temporary (work) file directory
#path.work: /es01/work
# Plugin installation path
#path.plugins: /es01/plugins

# Memory settings
#bootstrap.mlockall: true

# Compress data on TCP transport; defaults to false (no compression)
transport.tcp.compress: true

# Mandatory plugins; if the plugins listed below are not installed, the node will not start
#plugin.mandatory: mapper-attachments,lang-groovy

####sed 20180929
thread_pool.index.queue_size: 1000
thread_pool.bulk.queue_size: 1000
thread_pool.get.size: 32
thread_pool.write.queue_size: 1000

#### sed 20181022
#thread_pool.index.size: 32
#thread_pool.write.size: 32 
#thread_pool.bulk.size: 32
#indices.memory.index_buffer_size: 30%
#indices.memory.min_index_buffer_size: 5120mb
#http.max_content_length: 200mb

# CORS settings for the head plugin
http.cors.enabled: true
http.cors.allow-origin: "*"

# Password / security related settings
xpack.security.enabled: true 
xpack.ml.enabled: true 
xpack.license.self_generated.type: trial
http.cors.allow-headers: Authorization,X-Requested-With,Content-Length,Content-Type


#index.cache.field.max_size: 10000
#Indices.store.throttle.type: merge
#Index.store.throttl.max_bytes_per_sec: 200mb
#index.translog.durability=async
#index.translog.disable_flush=true

1.2 Data Node Configuration File

# ======================== Elasticsearch Configuration =========================

# ---------------------------------- Cluster -----------------------------------
cluster.name: GzEsCluster
# ------------------------------------ Node ------------------------------------
node.name: node-1
node.master: false
node.data: true
node.ingest: true
#node.attr.rack: r1
# ----------------------------------- Paths ------------------------------------
#path.data: /path/to/data
path.data: /home/hadoop/es/data/data/
#path.logs: /path/to/logs
path.logs: /home/hadoop/es/data/logs/
# ----------------------------------- Memory -----------------------------------
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
# ---------------------------------- Network -----------------------------------
network.host: 192.168.206.153
http.port: 9200
transport.tcp.port: 9310
# --------------------------------- Discovery ----------------------------------
#discovery.zen.ping.multicast.enabled: true
discovery.zen.ping.unicast.hosts: [ "192.168.206.152:9310", "192.168.206.153:9310", "192.168.206.154:9310" ]
discovery.zen.minimum_master_nodes: 1
# ---------------------------------- Gateway -----------------------------------
#gateway.recover_after_nodes: 3
# ---------------------------------- Various -----------------------------------
#action.destructive_requires_name: true

### 20180913 set
cluster.routing.allocation.same_shard.host: true
#index.number_of_replicas: 3
#index.number_of_shards: 27
# Temporary (work) file directory
#path.work: /es01/work
# Plugin installation path
#path.plugins: /es01/plugins

# Memory settings
#bootstrap.mlockall: true

# Compress data on TCP transport; defaults to false (no compression)
transport.tcp.compress: true

# Mandatory plugins; if the plugins listed below are not installed, the node will not start
#plugin.mandatory: mapper-attachments,lang-groovy

####sed 20180929
#thread_pool.index.queue_size: 1000
#thread_pool.bulk.queue_size: 1000
#thread_pool.get.size: 32
#thread_pool.write.queue_size: 1000

#### sed 20181022
#thread_pool.index.size: 40
#thread_pool.write.size: 40 
#thread_pool.bulk.size: 40
#indices.memory.index_buffer_size: 30%
#indices.memory.min_index_buffer_size: 5120mb
#http.max_content_length: 200mb
##index.cache.field.max_size: 10000
##Indices.store.throttle.type: merge
#Index.store.throttl.max_bytes_per_sec: 200mb
#index.translog.durability=async
#index.translog.disable_flush=true

# ES security related settings
xpack.security.enabled: true
xpack.ml.enabled: true
xpack.license.self_generated.type: trial
http.cors.allow-headers: Authorization,X-Requested-With,Content-Length,Content-Type

1.3 Startup Script

[hadoop@host153 elasticsearch-6.5.4]$ cat start.sh 
nohup ./bin/elasticsearch > elasticsearch.log 2>&1 &
tail -f elasticsearch.log
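
Once a node is up, it is worth confirming that it responds and that the cluster has formed. A quick sanity check against the master address configured above (after the passwords are set in section 5, add -u elastic to authenticate):

[hadoop@host152 elasticsearch-6.5.4]$ curl http://192.168.206.152:9200/_cluster/health?pretty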

2. Common Installation Errors

2.1 Per-Process Open File Limit

[2021-02-02T21:37:13,463][INFO ][o.e.b.BootstrapChecks    ] [master] bound or publishing to a non-loopback address, enforcing bootstrap checks

ERROR: [2] bootstrap checks failed

[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]

[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

[2021-02-02T21:37:13,498][INFO ][o.e.n.Node               ] [master] stopping ...

[2021-02-02T21:37:13,591][INFO ][o.e.n.Node               ] [master] stopped

[2021-02-02T21:37:13,591][INFO ][o.e.n.Node               ] [master] closing ...

[2021-02-02T21:37:13,612][INFO ][o.e.n.Node               ] [master] closed

The maximum number of files a process may open simultaneously is too low. Check the current limits with these two commands:

[hadoop@host152 elasticsearch-6.5.4]$ ulimit -Hn

4096

[hadoop@host152 elasticsearch-6.5.4]$ ulimit -Sn

1024

Edit /etc/security/limits.conf and add:

*               soft    nofile          65536

*               hard    nofile          65536

2.2 Per-User Process Limit

max number of threads [3818] for user [hadoop] is too low, increase to at least [4096]

Check the current user process limits:

[hadoop@host152 elasticsearch-6.5.4]$ ulimit -Hu

1024

[hadoop@host152 elasticsearch-6.5.4]$ ulimit -Su

1024

Edit /etc/security/limits.conf and add:

*               soft    nproc           4096

*               hard    nproc           4096
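
Changes to limits.conf only take effect for new login sessions. Log back in as the hadoop user and re-check (the values should now be 65536 and 4096 respectively):

[hadoop@host152 ~]$ ulimit -Hn

[hadoop@host152 ~]$ ulimit -Hu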

2.3 Virtual Memory Area Limit

[2021-02-02T21:59:15,424][INFO ][o.e.b.BootstrapChecks    ] [master] bound or publishing to a non-loopback address, enforcing bootstrap checks

ERROR: [1] bootstrap checks failed

[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

[2021-02-02T21:59:15,437][INFO ][o.e.n.Node               ] [master] stopping ...

[2021-02-02T21:59:15,477][INFO ][o.e.n.Node               ] [master] stopped

[2021-02-02T21:59:15,477][INFO ][o.e.n.Node               ] [master] closing ...

[2021-02-02T21:59:15,493][INFO ][o.e.n.Node               ] [master] closed

Edit /etc/sysctl.conf and add the kernel setting:

vm.max_map_count=262144

Apply the kernel parameter:

[root@host152 ~]# sysctl -p

vm.max_map_count = 262144
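
The value can also be applied to the running kernel immediately, without waiting for a reboot (the file edit is still needed to make it permanent):

[root@host152 ~]# sysctl -w vm.max_map_count=262144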

2.4 Disable the Firewall (No route to host)

Starting a node fails with:

[2021-02-03T00:07:37,053][INFO ][o.e.d.z.ZenDiscovery     ] [hode-2] failed to send join request to master [{master}{1u6DUu14SwmPsTIWLOsoHQ}{jcAfT7QATMK5N8WTOAeGKg}{192.168.206.152}{192.168.206.152:9310}{ml.machine_memory=1912107008, ml.max_open_jobs=20, xpack.installed=true, ml.enabled=true}], reason [RemoteTransportException[[master][192.168.206.152:9310][internal:discovery/zen/join]]; nested: ConnectTransportException[[hode-2][192.168.206.154:9310] connect_exception]; nested: IOException[No route to host: 192.168.206.154/192.168.206.154:9310]; nested: IOException[No route to host]; ]

Disable the firewall from starting at boot and stop it permanently:

[root@host154 ~]# sudo systemctl disable firewalld.service

[root@host154 ~]# sudo systemctl stop firewalld.service
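
After stopping the firewall, confirm from the other hosts that the transport port is reachable, for example with telnet if it is installed:

[hadoop@host153 ~]$ telnet 192.168.206.152 9310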

2.5 Access the Cluster Info URL

http://192.168.206.152:9200/
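
Once all nodes are up, the node list can be checked as well:

http://192.168.206.152:9200/_cat/nodes?v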

3. Install elasticsearch-head

3.1 head Requires Node.js

[root@host152 ~]# curl -sL https://rpm.nodesource.com/setup_8.x | bash -

[root@host152 ~]# yum install -y nodejs

Check the installed node and npm versions:

[root@host152 ~]# node -v

v8.17.0

[root@host152 ~]# npm -v

6.13.4

3.2 Install grunt

The head plugin is launched through the grunt command, so grunt must be installed. Change into the head directory and run npm install (a package.json must be present there):

[root@host152 ~]# npm install

If the install hangs for a long time, for example:

[ .................] / fetchMetadata: sill pacote version manifest for [email protected] fetched in 53249ms

switch the npm registry to the Taobao mirror and rerun the install:

[root@host152 elasticsearch-head-master]# npm config set registry https://registry.npm.taobao.org

[root@host152 elasticsearch-head-master]# npm config get registry

https://registry.npm.taobao.org/
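
If the grunt command is still not found after npm install, installing the grunt CLI globally is a common fix (this step is an assumption; it is not part of the original notes):

[root@host152 elasticsearch-head-master]# npm install -g grunt-cli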

3.3 Edit Gruntfile.js and Add hostname: '*'
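
The property goes into the connect.server.options block of Gruntfile.js so that grunt binds to all interfaces instead of localhost only. A sketch, assuming the stock elasticsearch-head Gruntfile.js:

connect: {
        server: {
                options: {
                        hostname: '*',
                        port: 9100,
                        base: '.',
                        keepalive: true
                }
        }
}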

 

3.4 Edit app.js
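
In _site/app.js the default connection address points at localhost, so head would otherwise query the wrong host. A sketch of the commonly edited line (the surrounding code may differ in your copy), changed to the master address used above:

this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://192.168.206.152:9200";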

 

3.5 Start head and Access It

[hadoop@host152 elasticsearch-head-master]$ grunt server &

Access head at:

http://192.168.206.152:9100/

4. Install Kibana

4.1 Unpack and Edit kibana.yml
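
For Kibana 6.5.4 the settings that usually need changing are the listen address and the Elasticsearch URL. A minimal sketch, assuming the addresses used above:

server.host: "192.168.206.152"
elasticsearch.url: "http://192.168.206.152:9200"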

4.2 Start Kibana

[hadoop@host152 kibana-6.5.4-linux-x86_64]$ ./bin/kibana &

Access it at:

http://192.168.206.152:5601/

5. Set Passwords for the ES Cluster

5.1 Check the security Feature via the _xpack Endpoint

URL:

http://192.168.206.152:9200/_xpack

Response:

{
  "build": {
    "hash": "d2ef93d",
    "date": "2018-12-17T21:20:45.656445Z"
  },
  "license": {
    "uid": "92423e2b-551b-4b2f-b916-43d9dca4f124",
    "type": "basic",
    "mode": "basic",
    "status": "active"
  },
  "features": {
    "graph": {
      "description": "Graph Data Exploration for the Elastic Stack",
      "available": false,
      "enabled": true
    },
    "logstash": {
      "description": "Logstash management component for X-Pack",
      "available": false,
      "enabled": true
    },
    "ml": {
      "description": "Machine Learning for the Elastic Stack",
      "available": false,
      "enabled": true,
      "native_code_info": {
        "version": "6.5.4",
        "build_hash": "b616085ef32393"
      }
    },
    "monitoring": {
      "description": "Monitoring for the Elastic Stack",
      "available": true,
      "enabled": true
    },
    "rollup": {
      "description": "Time series pre-aggregation and rollup",
      "available": true,
      "enabled": true
    },
    "security": {
      "description": "Security for the Elastic Stack",
      "available": false,
      "enabled": true
    },
    "watcher": {
      "description": "Alerting, Notification and Automation for the Elastic Stack",
      "available": false,
      "enabled": true
    }
  },
  "tagline": "You know, for X"
}

5.2 Edit elasticsearch.yml on the ES Nodes

xpack.security.enabled: true
xpack.ml.enabled: true
xpack.license.self_generated.type: trial
http.cors.allow-headers: Authorization,X-Requested-With,Content-Length,Content-Type

5.3 Run the Password Setup Command on Any Master Node

[hadoop@host154 elasticsearch-6.5.4]$ ./bin/elasticsearch-setup-passwords interactive

It fails with:

Unexpected response code [403] from calling GET http://192.168.206.152:9200/_xpack/security/_authenticate?pretty

It doesn't look like the X-Pack security feature is available on this Elasticsearch node.

Please check if you have installed a license that allows access to X-Pack Security feature.

 

ERROR: X-Pack Security is not available

Cause:

To use the security features in ES 6.x you need to upgrade the license in Kibana: either purchase a license or request a 30-day trial under Management -> Elasticsearch -> License Management. From ES 7.x onward the core security features are free, X-Pack ships installed by default, and you only need to enable it in the configuration.

Starting with ES 6.8 and ES 7.0, Security was folded into the Basic X-Pack license, so the basic features are free. The authentication services in X-Pack are called realms, and realms come in two types:

Built-in realms (free)

File/Native: usernames and passwords are stored inside Elasticsearch (see the sketch after this list)

External realms (paid)

LDAP / Active Directory / PKI / SAML / Kerberos for centralized authentication
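
To illustrate what the native realm means in practice: once the built-in passwords have been set (section 5.5), additional users stored inside Elasticsearch can be created through the security API. This is a sketch only; the user name, password and role below are placeholders:

curl -u elastic -XPUT 'http://192.168.206.152:9200/_xpack/security/user/devuser?pretty' -H 'Content-Type: application/json' -d '{"password": "changeme123", "roles": ["superuser"]}'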

5.4 Obtain a License via Kibana

Open:

http://192.168.206.152:5601/

In Kibana, go to Management -> Elasticsearch -> License Management and click the upgrade button on the right to start a free 30-day trial of the premium license.

After the upgrade, the License Management page in Kibana shows the active trial license.

5.5 Set the Passwords Again After Obtaining the License

[hadoop@host152 elasticsearch-6.5.4]$ ./bin/elasticsearch-setup-passwords interactive

Unexpected response code [500] from calling GET http://192.168.206.152:9200/_xpack/security/_authenticate?pretty

It doesn't look like the X-Pack security feature is enabled on this Elasticsearch node.

Please check if you have enabled X-Pack security in your elasticsearch.yml configuration file.

ERROR: X-Pack Security is disabled by configuration.

Cause: configuration file changes only take effect after a restart. Restart ES, then run the password setup again:

[hadoop@host152 elasticsearch-6.5.4]$ ./bin/elasticsearch-setup-passwords interactive

Initiating the setup of passwords for reserved users elastic,apm_system,kibana,logstash_system,beats_system,remote_monitoring_user.

You will be prompted to enter passwords as the process progresses.

Please confirm that you would like to continue [y/N]y

6. Accessing the Cluster After Setting Passwords

6.1 Cluster Info Access

http://192.168.206.152:9200/

6.2 head Access
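
With security enabled, head can no longer query the cluster anonymously. A common workaround (it relies on the http.cors.allow-headers: Authorization setting added to elasticsearch.yml earlier) is to pass the credentials as query parameters:

http://192.168.206.152:9100/?auth_user=elastic&auth_password=<your elastic password>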

6.3 Kibana Access

Edit kibana.yml and set the username and password:

elasticsearch.username: "elastic"

elasticsearch.password: "123456"

Restart Kibana:

[hadoop@host152 kibana-6.5.4-linux-x86_64]$ netstat -apn|grep 5601

(Not all processes could be identified, non-owned process info

 will not be shown, you would have to be root to see it all.)

tcp        0      0 192.168.206.152:5601    0.0.0.0:*               LISTEN      5900/./bin/../node/

[hadoop@host152 kibana-6.5.4-linux-x86_64]$ kill -9 5900

[hadoop@host152 kibana-6.5.4-linux-x86_64]$ ./bin/kibana &

Once it has started, access:

http://192.168.206.152:5601/

OK, that's a wrap!

Copyright notice: this is an original article by the author, published under the CC 4.0 BY-SA license. Please include the original source link and this notice when republishing.
Original link: https://blog.csdn.net/SimpleSimpleSimples/article/details/113586215
