
Question about parameters in the hdfs-dptst-example.cfg configuration file #27

Open
xiaoguanyu opened this issue May 15, 2014 · 11 comments

Comments

@xiaoguanyu

File path: minos/config/conf/hdfs/hdfs-dptst-example.cfg
[journalnode]
base_port=12100
host.0=10.38.11.59
host.1=10.38.11.134
host.2=10.38.11.135
[namenode]
base_port=12200
host.0=10.38.11.59
host.1=10.38.11.134
[zkfc]
base_port=12300
[datanode]
base_port=12400
host.0=10.38.11.134
host.1=10.38.11.135
For these parameters, should base_port be set to the same RPC port numbers that journalnode, namenode, etc. use in Hadoop's configuration files, or can it be chosen arbitrarily?

@YxAc
Member

YxAc commented May 15, 2014

The Hadoop configuration files are generated from the base_port settings; the RPC port corresponds to base_port.
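For illustration only, a minimal Python 2 sketch of that idea (not minos source; the helper name and the assumption that the generated RPC port is exactly base_port, with no per-instance offset, are mine):

# Hypothetical sketch, not minos code: map the [namenode] section of
# hdfs-dptst-example.cfg to namenode RPC addresses.
import ConfigParser

def namenode_rpc_addresses(cfg_path):
    parser = ConfigParser.ConfigParser()
    parser.read(cfg_path)
    base_port = parser.getint('namenode', 'base_port')  # 12200 in the example above
    hosts = [v for k, v in parser.items('namenode') if k.startswith('host.')]
    # Assumption: the RPC port written into the generated config is base_port itself.
    return ['%s:%d' % (host, base_port) for host in hosts]

# namenode_rpc_addresses('hdfs-dptst-example.cfg')
# -> ['10.38.11.59:12200', '10.38.11.134:12200'], i.e. the kind of values a
#    generated hdfs-site.xml would carry for the namenode RPC addresses.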


@xiaoguanyu
Author

But I have already deployed hadoop2 on the cluster. Is the Hadoop configuration file generated from the base_port settings the hadoop2/etc/hadoop/hdfs-site.xml file?

[email protected]

发件人: 勇幸
发送时间: 2014-05-15 09:35
收件人: XiaoMi/minos
抄送: xiaoguanyu
主题: Re: [minos] hdfs-dptst-example.cfg配置文件相关参数问题咨询 (#27)
会根据base_port的设置生成hadoop的配置文件,rpc端口对应着base_port

2014-05-15 9:07 GMT+08:00 xiaoguanyu [email protected]:

文件目录:monos/config/conf/hdfs/hdfs-dptst-example.cfg
[journalnode]
base_port=12100
host.0=10.38.11.59
host.1=10.38.11.134
host.2=10.38.11.135
[namenode]
base_port=12200
host.0=10.38.11.59
host.1=10.38.11.134
[zkfc]
base_port=12300
[datanode]
base_port=12400
host.0=10.38.11.134
host.1=10.38.11.135
请问这些参数中base_port的设置是同hadoop的配置文件中的journalnode、namenode等的rpc端口号相同,还是自己随意定义的


Reply to this email directly or view it on GitHubhttps://github.com//issues/27
.


Reply to this email directly or view it on GitHub.

@xiaoguanyu
Author

How does the minos client work? If I just run ./deploy install zookeeper dptst and ./deploy install hdfs dptst-example, will it automatically install a ZooKeeper and an HDFS even though zk and hdfs are already deployed on the cluster?

[email protected]

发件人: 勇幸
发送时间: 2014-05-15 09:35
收件人: XiaoMi/minos
抄送: xiaoguanyu
主题: Re: [minos] hdfs-dptst-example.cfg配置文件相关参数问题咨询 (#27)
会根据base_port的设置生成hadoop的配置文件,rpc端口对应着base_port

2014-05-15 9:07 GMT+08:00 xiaoguanyu [email protected]:

文件目录:monos/config/conf/hdfs/hdfs-dptst-example.cfg
[journalnode]
base_port=12100
host.0=10.38.11.59
host.1=10.38.11.134
host.2=10.38.11.135
[namenode]
base_port=12200
host.0=10.38.11.59
host.1=10.38.11.134
[zkfc]
base_port=12300
[datanode]
base_port=12400
host.0=10.38.11.134
host.1=10.38.11.135
请问这些参数中base_port的设置是同hadoop的配置文件中的journalnode、namenode等的rpc端口号相同,还是自己随意定义的


Reply to this email directly or view it on GitHubhttps://github.com//issues/27
.


Reply to this email directly or view it on GitHub.

@YxAc
Member

YxAc commented May 15, 2014

Was your hadoop2 deployed by minos? The Hadoop configuration files generated by minos are placed under the job's run path on each cluster machine, for example /home/work/app/hdfs/dptst-example/journalnode/hdfs-site.xml.
The client's install command only uploads the Hadoop package to the Tank server; during bootstrap, each production machine pulls the package from Tank. Whether zk and hdfs are already deployed on the cluster has no direct bearing on install; you just have to run install to upload the package before you can bootstrap.
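(Not part of minos, just a quick way to verify this on a cluster machine after bootstrap: read the generated file at the path given above and print its RPC-related properties; the 'rpc' filter is an assumption on my side.)

# Hypothetical post-bootstrap check, run on a production machine.
import xml.etree.ElementTree as ET

conf = '/home/work/app/hdfs/dptst-example/journalnode/hdfs-site.xml'
for prop in ET.parse(conf).getroot().findall('property'):
    name = prop.findtext('name')
    if name and 'rpc' in name:  # e.g. dfs.namenode.rpc-address.*
        print name, '=', prop.findtext('value')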

@xiaoguanyu
Author

My hadoop2 was deployed by myself; I misunderstood and thought running install would automatically install Hadoop. I printed out the following in client/supervisor_client.py:
self.proxy
self.service
self.cluster
self.job
self.instance_id
It always reports a connection error:
[root@namenode client]# ./deploy bootstrap hdfs dptst-example
2014-05-15 10:38:59 You should set a bootstrap password, it will be requried when you do cleanup
Set a password manually? (y/n) y
Please input your password:
2014-05-15 10:39:03 Your password is: 123456, you should store this in a safe place, because this is the verification code used to do cleanup
<ServerProxy for root:[email protected]:9001/RPC2>
hdfs
dptst-example
journalnode
-1
<ServerProxy for root:[email protected]:9001/RPC2>
hdfs
dptst-example
namenode
-1
<ServerProxy for root:[email protected]:9001/RPC2>
hdfs
dptst-example
namenode
-1
<ServerProxy for root:[email protected]:9001/RPC2>
hdfs
dptst-example
namenode
-1
<ServerProxy for root:[email protected]:9001/RPC2>
hdfs
dptst-example
namenode
-1
<ServerProxy for root:[email protected]:9001/RPC2>
hdfs
dptst-example
datanode
-1
Traceback (most recent call last):
File "/usr/local/test/minos/client/deploy.py", line 284, in
main()
File "/usr/local/test/minos/client/deploy.py", line 281, in main
return args.handler(args)
File "/usr/local/test/minos/client/deploy.py", line 229, in process_command_bootstrap
return deploy_tool.bootstrap(args)
File "/usr/local/test/minos/client/deploy_hdfs.py", line 238, in bootstrap
bootstrap_job(args, hosts[host_id].ip, job_name, host_id, instance_id, first, cleanup_token)
File "/usr/local/test/minos/client/deploy_hdfs.py", line 201, in bootstrap_job
args.hdfs_config.parse_generated_config_files(args, job_name, host_id, instance_id)
File "/usr/local/test/minos/client/service_config.py", line 665, in parse_generated_config_files
args, self.cluster, self.jobs, current_job, host_id, instance_id))
File "/usr/local/test/minos/client/service_config.py", line 652, in parse_generated_files
file_dict[key] = ServiceConfig.parse_item(args, cluster, jobs, current_job, host_id, instance_id, value)
File "/usr/local/test/minos/client/service_config.py", line 596, in parse_item
new_item.append(callback(args, cluster, jobs, current_job, host_id, instance_id, reg_expr[iter]))
File "/usr/local/test/minos/client/service_config.py", line 255, in get_section_attribute
return get_specific_dir(host.ip, args.service, cluster.name, section, section_instance_id, attribute)
File "/usr/local/test/minos/client/service_config.py", line 185, in get_specific_dir
return ",".join(supervisor_client.get_available_data_dirs())
File "/usr/local/test/minos/client/supervisor_client.py", line 30, in get_available_data_dirs
self.cluster, self.job)
File "/usr/local/lib/python2.7/xmlrpclib.py", line 1224, in call
return self.__send(self.__name, args)
File "/usr/local/lib/python2.7/xmlrpclib.py", line 1578, in __request
verbose=self.__verbose
File "/usr/local/lib/python2.7/xmlrpclib.py", line 1264, in request
return self.single_request(host, handler, request_body, verbose)
File "/usr/local/lib/python2.7/xmlrpclib.py", line 1292, in single_request
self.send_content(h, request_body)
File "/usr/local/lib/python2.7/xmlrpclib.py", line 1439, in send_content
connection.endheaders(request_body)
File "/usr/local/lib/python2.7/httplib.py", line 969, in endheaders
self._send_output(message_body)
File "/usr/local/lib/python2.7/httplib.py", line 829, in _send_output
self.send(msg)
File "/usr/local/lib/python2.7/httplib.py", line 791, in send
self.connect()
File "/usr/local/lib/python2.7/httplib.py", line 772, in connect
self.timeout, self.source_address)
File "/usr/local/lib/python2.7/socket.py", line 571, in create_connection
raise err
socket.error: [Errno 111] Connection refused

[email protected]

发件人: 勇幸
发送时间: 2014-05-15 10:15
收件人: XiaoMi/minos
抄送: xiaoguanyu
主题: Re: [minos] hdfs-dptst-example.cfg配置文件相关参数问题咨询 (#27)
你的hadoop2是minos部署的?使用minos生成的hadoop配置文件会放在集群机器上job的run路径下,例如/home/work/app/hdfs/dptst-example/journalnode/hdfs-site.xml
client的install命令只是将hadoop包上传到Tank服务器上;在bootstrap的时候,各个production machine会从tank上拉取包;集群中是否部署zk和hdfs与install没有直接关系,只是必须先install将包上传,然后才可以bootstrap

Reply to this email directly or view it on GitHub.

@YxAc
Member

YxAc commented May 15, 2014

Connection refused: can you access 10.38.11.59:9001? Before deploying, Tank must be set up first, and supervisord must be deployed on all of the production machines.
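(A quick way to check this from the client machine, sketched against the same root:***@host:9001/RPC2 endpoints that appear in the traceback; the host list and the placeholder password are assumptions, and supervisor.getState() is supervisord's standard XML-RPC call.)

# Hypothetical connectivity check for the supervisord endpoints minos talks to.
import xmlrpclib

for host in ['10.38.11.59', '10.38.11.134', '10.38.11.135']:
    url = 'http://root:PASSWORD@%s:9001/RPC2' % host  # use your supervisord credentials
    try:
        print host, 'OK', xmlrpclib.ServerProxy(url).supervisor.getState()
    except Exception as e:
        print host, 'FAILED', e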

@xiaoguanyu
Author

Hello, do all the production machines that run supervisord also need to have tank deployed?

[email protected]

发件人: 勇幸
发送时间: 2014-05-15 11:41
收件人: XiaoMi/minos
抄送: xiaoguanyu
主题: Re: [minos] hdfs-dptst-example.cfg配置文件相关参数问题咨询 (#27)
连接错误,请问你10.38.11.59:9001能访问不?部署前需要先部署好Tank,并且需要在所有的production machines上部署supervisord

Reply to this email directly or view it on GitHub.

@wuzesheng
Contributor

Only one tank instance is needed; supervisord has to be deployed on every machine. See the architecture diagram in readme.md for details.

@xiaoguanyu
Author

I deployed tank and supervisord only on 10.38.11.59, and only supervisord on 10.38.11.8; the two machines share the tank on .59. The supervisor status of 10.38.11.8 is shown in figure 2. What do the two failure entries mean?
Figure 1: supervisor status at 10.38.11.59:9001 (image not shown)
Figure 2: supervisor status at 10.38.11.8:9001 (image not shown)

[email protected]

发件人: Zesheng Wu
发送时间: 2014-05-15 15:32
收件人: XiaoMi/minos
抄送: xiaoguanyu
主题: Re: [minos] hdfs-dptst-example.cfg配置文件相关参数问题咨询 (#27)
tank只需要一个就可以,supervisord需要在每以机器上布署,详细参考readme.md里的架构图

Reply to this email directly or view it on GitHub.

@wuzesheng
Contributor

I can't see the images.

@xiaoguanyu
Author

I deployed tank and supervisord only on 10.38.11.59, and only supervisord on 10.38.11.8; the two machines share the tank on .59. The supervisor status of 10.38.11.8 is shown in figure 2. What do the two failure entries mean? Please see the images in the attachment, thanks.
59tu is the supervisor status of 10.38.11.59
8tu is the supervisor status of 10.38.11.8

[email protected]

发件人: Zesheng Wu
发送时间: 2014-05-15 15:47
收件人: XiaoMi/minos
抄送: xiaoguanyu
主题: Re: [minos] hdfs-dptst-example.cfg配置文件相关参数问题咨询 (#27)
看不到图

Reply to this email directly or view it on GitHub.
