diff --git a/README.md b/README.md
index 501d1e4d664d..5e02bd1151b7 100644
--- a/README.md
+++ b/README.md
@@ -7,38 +7,30 @@
 [![PyPI version](https://badge.fury.io/py/xgboost.svg)](https://pypi.python.org/pypi/xgboost/)
 [![Gitter chat for developers at https://gitter.im/dmlc/xgboost](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/dmlc/xgboost?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
 
-|[Documentation](https://xgboost.readthedocs.org)| [Resources](demo/README.md) | [Installation](https://xgboost.readthedocs.org/en/latest/build.html)|
+[Documentation](https://xgboost.readthedocs.org) |
+[Resources](demo/README.md) |
+[Installation](https://xgboost.readthedocs.org/en/latest/build.html) |
+[Release Notes](NEWS.md) |
+[RoadMap](https://github.com/dmlc/xgboost/issues/873)
 
-XGBoost is an optimized distributed gradient boosting library designed to be highly *efficient*, *flexible* and *portable*.
+XGBoost is an optimized distributed gradient boosting library designed to be highly ***efficient***, ***flexible*** and ***portable***.
 It implements machine learning algorithms under the [Gradient Boosting](https://en.wikipedia.org/wiki/Gradient_boosting) framework.
 XGBoost provides a parallel tree boosting(also known as GBDT, GBM) that solve many data science problems in
 a fast and accurate way. The same code runs on major distributed environment(Hadoop, SGE, MPI) and can solve problems beyond billions of examples.
-XGBoost is part of [DMLC](http://dmlc.github.io/) projects.
 
 What's New
 ----------
 * [XGBoost brick](NEWS.md) Release
 
-Features
---------
-* Easily accessible through CLI, [python](https://github.com/dmlc/xgboost/blob/master/demo/guide-python/basic_walkthrough.py),
-  [R](https://github.com/dmlc/xgboost/blob/master/R-package/demo/basic_walkthrough.R),
-  [Julia](https://github.com/antinucleon/XGBoost.jl/blob/master/demo/basic_walkthrough.jl)
-* Its fast! Benchmark numbers comparing xgboost, H20, Spark, R - [benchm-ml numbers](https://github.com/szilard/benchm-ml)
-* Memory efficient - Handles sparse matrices, supports external memory
-* Accurate prediction, and used extensively by data scientists and kagglers - [highlight links](https://github.com/dmlc/xgboost/blob/master/doc/README.md#highlight-links)
-* Distributed version runs on Hadoop (YARN), MPI, SGE etc., scales to billions of examples.
-
-Bug Reporting
--------------
+Ask a Question
+--------------
 * For reporting bugs please use the [xgboost/issues](https://github.com/dmlc/xgboost/issues) page.
-* For generic questions or to share your experience using xgboost please use the [XGBoost User Group](https://groups.google.com/forum/#!forum/xgboost-user/)
+* For generic questions or to share your experience using xgboost please use the [XGBoost User Group](https://groups.google.com/forum/#!forum/xgboost-user/)
 
 Contributing to XGBoost
 -----------------------
 XGBoost has been developed and used by a group of active community members. Everyone is more than welcome to contribute. It is a way to make the project better and more accessible to more users.
-* Check out [Feature Wish List](https://github.com/dmlc/xgboost/labels/Wish-List) to see what can be improved, or open an issue if you want something.
+* Check out [call for contributions](https://github.com/dmlc/xgboost/issues?q=is%3Aissue+is%3Aclosed+label%3Acall-for-contribution) and [Roadmap](https://github.com/dmlc/xgboost/issues/873) to see what can be improved, or open an issue if you want something.
 * Contribute to the [documents and examples](https://github.com/dmlc/xgboost/blob/master/doc/) to share your experience with other users.
 * Please add your name to [CONTRIBUTORS.md](CONTRIBUTORS.md) and after your patch has been merged.
-  Please also update [NEWS.md](NEWS.md) on changes and improvements in API and docs.
diff --git a/src/cli_main.cc b/src/cli_main.cc
index 720a3b1855a6..9ece093dac61 100644
--- a/src/cli_main.cc
+++ b/src/cli_main.cc
@@ -213,7 +213,9 @@ void CLITrain(const CLIParam& param) {
         LOG(CONSOLE) << res;
       }
     }
-    if (param.save_period != 0 && (i + 1) % param.save_period == 0) {
+    if (param.save_period != 0 &&
+        (i + 1) % param.save_period == 0 &&
+        rabit::GetRank() == 0) {
       std::ostringstream os;
       os << param.model_dir << '/'
          << std::setfill('0') << std::setw(4)
@@ -233,7 +235,8 @@ void CLITrain(const CLIParam& param) {
   }
   // always save final round
   if ((param.save_period == 0 || param.num_round % param.save_period != 0) &&
-      param.model_out != "NONE") {
+      param.model_out != "NONE" &&
+      rabit::GetRank() == 0) {
     std::ostringstream os;
     if (param.model_out == "NULL") {
       os << param.model_dir << '/'
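
Both hunks above apply the same guard: in a distributed job every rabit worker executes `CLITrain`, so without the `rabit::GetRank() == 0` check each worker would write the same checkpoint or final model file. Below is a minimal standalone sketch of that single-writer pattern, not the project's code; `MaybeSaveCheckpoint` and the `save_fn` callback are hypothetical stand-ins for the serialization calls the real CLI performs after building the path.

```cpp
// Sketch (assumed names) of the rank-0 guard added in the hunks above: only
// one worker in the rabit group performs the filesystem write, the rest skip it.
#include <rabit/rabit.h>

#include <functional>
#include <iomanip>
#include <sstream>
#include <string>

// `save_fn` stands in for whatever actually serializes the model to the path.
void MaybeSaveCheckpoint(int round, int save_period, const std::string& model_dir,
                         const std::function<void(const std::string&)>& save_fn) {
  // Same condition shape as the first hunk: periodic save, restricted to rank 0.
  if (save_period != 0 && (round + 1) % save_period == 0 && rabit::GetRank() == 0) {
    std::ostringstream os;
    os << model_dir << '/' << std::setfill('0') << std::setw(4) << (round + 1) << ".model";
    save_fn(os.str());  // only the rank-0 process ever reaches this write
  }
}
```

The non-zero ranks simply skip the write and continue with the boosting rounds, so no extra synchronization is needed; the change only removes the redundant (and potentially conflicting) writes to a shared model directory.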