Configure /etc/hosts.
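The /etc/hosts suggestion means making the NameNode hostnames from the config (ha01, ha04) resolvable on the machine running DataX. A sketch with placeholder IPs (substitute the real addresses of your NameNodes):

```
# /etc/hosts on the DataX host (IPs below are placeholders)
192.168.1.101  ha01
192.168.1.104  ha04
```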
Same question here. I configured hdfsreader as above and got the same error.
I've solved it: put the three files hdfs-site.xml, core-site.xml, and hive-site.xml inside the hdfswriter.jar file.
Used WinRAR to add the three files hdfs-site.xml, core-site.xml, and hive-site.xml into datax/plugin/reader/hdfsreader/hdfsreader-0.0.1-SNAPSHOT.jar. Thanks 🙏 @lijufeng2016
This works. hdfs-site.xml, core-site.xml, and hive-site.xml can be downloaded from Cloudera Manager as the Hive client configuration.
```json
"name": "hdfswriter",
"parameter": {
    "defaultFS": "hdfs://cluster",
    "hadoopConfig": {
        "dfs.nameservices": "cluster",
        "dfs.ha.namenodes.cluster": "nn1,nn2",
        "dfs.namenode.rpc-address.cluster.nn1": "ha01:8020",
        "dfs.namenode.rpc-address.cluster.nn2": "ha04:8020",
        "dfs.client.failover.proxy.provider.cluster": "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"
    },
    "fieldDelimiter": "\u0001",
    "fileName": "data",
    "fileType": "orc",
    "path": "/apps/hive/warehouse/mlg.db/ad_info_tao",
    "writeMode": "truncate",
```