Federated Learning faces significant challenges arising from two critical uncertainties: the validity of a client's participation, which can be compromised by network and system heterogeneity, and the utility of the data each client contributes, which varies with statistical data heterogeneity. Traditional client selection methods often conflate these two uncertainties, leading to suboptimal performance. To address this issue, we propose FedSUV, an innovative client selection framework that decouples validity and utility uncertainties. FedSUV casts client selection as a multi-objective optimization problem and employs advanced bandit algorithms: a confidence-bound-based linear contextual bandit for assessing validity and a Gaussian Process bandit for evaluating utility. We validate the effectiveness of FedSUV through both theoretical analysis and large-scale experiments conducted on our physical cluster.
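To make the two decoupled scores concrete, the sketch below illustrates one way such a scheme could look: a LinUCB-style upper confidence bound for participation validity and a GP-UCB-style bound for data utility, combined by a simple filter-then-rank selection rule. All names, parameters, and the selection rule here are illustrative assumptions made for exposition, not FedSUV's actual implementation.

```python
import numpy as np

# Hypothetical sketch of decoupled client scoring with two bandits.
# Names (alpha, beta, v_min, select_clients, etc.) are assumptions for
# illustration only, not the paper's implementation.

class LinUCBValidity:
    """Linear contextual bandit (LinUCB-style) scoring participation validity."""
    def __init__(self, dim, alpha=1.0):
        self.alpha = alpha
        self.A = np.eye(dim)      # regularized design matrix
        self.b = np.zeros(dim)    # reward-weighted context sum

    def score(self, x):
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b    # ridge-regression estimate of validity model
        # Upper confidence bound on the client's expected validity.
        return x @ theta + self.alpha * np.sqrt(x @ A_inv @ x)

    def update(self, x, reward):
        self.A += np.outer(x, x)
        self.b += reward * x


class GPUCBUtility:
    """Gaussian Process bandit (GP-UCB-style) scoring statistical utility."""
    def __init__(self, beta=2.0, lengthscale=1.0, noise=1e-2):
        self.beta, self.ls, self.noise = beta, lengthscale, noise
        self.X, self.y = [], []

    def _rbf(self, A, B):
        d = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-0.5 * d / self.ls**2)

    def score(self, x):
        if not self.X:            # no observations yet: fall back to prior UCB
            return self.beta
        X, y = np.array(self.X), np.array(self.y)
        K = self._rbf(X, X) + self.noise * np.eye(len(X))
        k = self._rbf(X, x[None, :])
        K_inv = np.linalg.inv(K)
        mu = (k.T @ K_inv @ y).item()            # posterior mean
        var = (1.0 - k.T @ K_inv @ k).item()     # posterior variance
        return mu + self.beta * np.sqrt(max(var, 0.0))

    def update(self, x, reward):
        self.X.append(x)
        self.y.append(reward)


def select_clients(contexts, validity_bandit, utility_bandit, k=10, v_min=0.5):
    """Per-round selection: keep clients whose validity UCB clears a threshold,
    then rank the survivors by utility UCB (one way to treat the two objectives)."""
    valid = [i for i, x in contexts.items() if validity_bandit.score(x) >= v_min]
    ranked = sorted(valid, key=lambda i: utility_bandit.score(contexts[i]), reverse=True)
    return ranked[:k]
```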